2026-03-29 01:36:11.153459 | Job console starting
2026-03-29 01:36:11.178788 | Updating git repos
2026-03-29 01:36:11.347064 | Cloning repos into workspace
2026-03-29 01:36:11.592007 | Restoring repo states
2026-03-29 01:36:11.617037 | Merging changes
2026-03-29 01:36:11.617070 | Checking out repos
2026-03-29 01:36:11.940537 | Preparing playbooks
2026-03-29 01:36:12.680914 | Running Ansible setup
2026-03-29 01:36:17.089721 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-29 01:36:17.843092 |
2026-03-29 01:36:17.843272 | PLAY [Base pre]
2026-03-29 01:36:17.860527 |
2026-03-29 01:36:17.860665 | TASK [Setup log path fact]
2026-03-29 01:36:17.890778 | orchestrator | ok
2026-03-29 01:36:17.908185 |
2026-03-29 01:36:17.908315 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-29 01:36:17.949516 | orchestrator | ok
2026-03-29 01:36:17.961620 |
2026-03-29 01:36:17.961732 | TASK [emit-job-header : Print job information]
2026-03-29 01:36:18.005704 | # Job Information
2026-03-29 01:36:18.005953 | Ansible Version: 2.16.14
2026-03-29 01:36:18.006015 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-03-29 01:36:18.006068 | Pipeline: periodic-midnight
2026-03-29 01:36:18.006105 | Executor: 521e9411259a
2026-03-29 01:36:18.006137 | Triggered by: https://github.com/osism/testbed
2026-03-29 01:36:18.006187 | Event ID: 8728361d0a6a491ab345cc1284af2839
2026-03-29 01:36:18.015101 |
2026-03-29 01:36:18.015249 | LOOP [emit-job-header : Print node information]
2026-03-29 01:36:18.140097 | orchestrator | ok:
2026-03-29 01:36:18.140462 | orchestrator | # Node Information
2026-03-29 01:36:18.140536 | orchestrator | Inventory Hostname: orchestrator
2026-03-29 01:36:18.140592 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-29 01:36:18.140638 | orchestrator | Username: zuul-testbed03
2026-03-29 01:36:18.140683 | orchestrator | Distro: Debian 12.13
2026-03-29 01:36:18.140733 | orchestrator | Provider: static-testbed
2026-03-29 01:36:18.140777 | orchestrator | Region:
2026-03-29 01:36:18.140822 | orchestrator | Label: testbed-orchestrator
2026-03-29 01:36:18.140864 | orchestrator | Product Name: OpenStack Nova
2026-03-29 01:36:18.140905 | orchestrator | Interface IP: 81.163.193.140
2026-03-29 01:36:18.168399 |
2026-03-29 01:36:18.168567 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-29 01:36:18.659582 | orchestrator -> localhost | changed
2026-03-29 01:36:18.674949 |
2026-03-29 01:36:18.675089 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-29 01:36:19.837125 | orchestrator -> localhost | changed
2026-03-29 01:36:19.862961 |
2026-03-29 01:36:19.863103 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-29 01:36:20.164072 | orchestrator -> localhost | ok
2026-03-29 01:36:20.181670 |
2026-03-29 01:36:20.181852 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-29 01:36:20.215409 | orchestrator | ok
2026-03-29 01:36:20.234772 | orchestrator | included: /var/lib/zuul/builds/8260414bd6014c3b8bec15592c50df7f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-29 01:36:20.243423 |
2026-03-29 01:36:20.243529 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-29 01:36:22.513069 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-29 01:36:22.513636 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8260414bd6014c3b8bec15592c50df7f/work/8260414bd6014c3b8bec15592c50df7f_id_rsa
2026-03-29 01:36:22.513759 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8260414bd6014c3b8bec15592c50df7f/work/8260414bd6014c3b8bec15592c50df7f_id_rsa.pub
2026-03-29 01:36:22.513840 | orchestrator -> localhost | The key fingerprint is:
2026-03-29 01:36:22.513911 | orchestrator -> localhost | SHA256:kydxpMwI0Czd6VrhCbtaN4I8DdToQMe8cgxpkNC93gM zuul-build-sshkey
2026-03-29 01:36:22.513977 | orchestrator -> localhost | The key's randomart image is:
2026-03-29 01:36:22.514062 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-29 01:36:22.514128 | orchestrator -> localhost | |*+=X.. . . |
2026-03-29 01:36:22.514239 | orchestrator -> localhost | |o+*oB.++ o |
2026-03-29 01:36:22.514303 | orchestrator -> localhost | |.+o..*.o= . |
2026-03-29 01:36:22.514362 | orchestrator -> localhost | | .o+E = + |
2026-03-29 01:36:22.514419 | orchestrator -> localhost | | .o= * S . |
2026-03-29 01:36:22.514484 | orchestrator -> localhost | | + B = + |
2026-03-29 01:36:22.514543 | orchestrator -> localhost | | + o o |
2026-03-29 01:36:22.514601 | orchestrator -> localhost | | . |
2026-03-29 01:36:22.514660 | orchestrator -> localhost | | |
2026-03-29 01:36:22.514720 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-29 01:36:22.514915 | orchestrator -> localhost | ok: Runtime: 0:00:01.767174
2026-03-29 01:36:22.528543 |
2026-03-29 01:36:22.528746 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-29 01:36:22.563599 | orchestrator | ok
2026-03-29 01:36:22.576790 | orchestrator | included: /var/lib/zuul/builds/8260414bd6014c3b8bec15592c50df7f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-29 01:36:22.586426 |
2026-03-29 01:36:22.586649 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-29 01:36:22.613062 | orchestrator | skipping: Conditional result was False
2026-03-29 01:36:22.630747 |
2026-03-29 01:36:22.630938 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-29 01:36:23.233094 | orchestrator | changed
2026-03-29 01:36:23.242170 |
2026-03-29 01:36:23.242303 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-29 01:36:23.536746 | orchestrator | ok
2026-03-29 01:36:23.547277 |
2026-03-29 01:36:23.547425 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-29 01:36:23.961298 | orchestrator | ok
2026-03-29 01:36:23.969961 |
2026-03-29 01:36:23.970110 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-29 01:36:24.404968 | orchestrator | ok
2026-03-29 01:36:24.411580 |
2026-03-29 01:36:24.411690 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-29 01:36:24.435327 | orchestrator | skipping: Conditional result was False
2026-03-29 01:36:24.442125 |
2026-03-29 01:36:24.442278 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-29 01:36:24.882480 | orchestrator -> localhost | changed
2026-03-29 01:36:24.917924 |
2026-03-29 01:36:24.918259 | TASK [add-build-sshkey : Add back temp key]
2026-03-29 01:36:25.315634 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8260414bd6014c3b8bec15592c50df7f/work/8260414bd6014c3b8bec15592c50df7f_id_rsa (zuul-build-sshkey)
2026-03-29 01:36:25.316104 | orchestrator -> localhost | ok: Runtime: 0:00:00.027305
2026-03-29 01:36:25.329611 |
2026-03-29 01:36:25.329768 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-29 01:36:25.782206 | orchestrator | ok
2026-03-29 01:36:25.791911 |
2026-03-29 01:36:25.792046 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-29 01:36:25.826982 | orchestrator | skipping: Conditional result was False
2026-03-29 01:36:25.885653 |
2026-03-29 01:36:25.885882 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-29 01:36:26.341074 | orchestrator | ok
2026-03-29 01:36:26.355989 |
2026-03-29 01:36:26.356109 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-29 01:36:26.396108 | orchestrator | ok
2026-03-29 01:36:26.403762 |
2026-03-29 01:36:26.403873 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-29 01:36:26.712010 | orchestrator -> localhost | ok
2026-03-29 01:36:26.729348 |
2026-03-29 01:36:26.729501 | TASK [validate-host : Collect information about the host]
2026-03-29 01:36:28.032442 | orchestrator | ok
2026-03-29 01:36:28.048319 |
2026-03-29 01:36:28.048450 | TASK [validate-host : Sanitize hostname]
2026-03-29 01:36:28.115244 | orchestrator | ok
2026-03-29 01:36:28.126546 |
2026-03-29 01:36:28.126717 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-29 01:36:28.720400 | orchestrator -> localhost | changed
2026-03-29 01:36:28.734385 |
2026-03-29 01:36:28.734560 | TASK [validate-host : Collect information about zuul worker]
2026-03-29 01:36:29.228746 | orchestrator | ok
2026-03-29 01:36:29.237607 |
2026-03-29 01:36:29.237755 | TASK [validate-host : Write out all zuul information for each host]
2026-03-29 01:36:29.831353 | orchestrator -> localhost | changed
2026-03-29 01:36:29.852071 |
2026-03-29 01:36:29.852270 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-29 01:36:30.179925 | orchestrator | ok
2026-03-29 01:36:30.189840 |
2026-03-29 01:36:30.189993 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-29 01:36:56.188006 | orchestrator | changed:
2026-03-29 01:36:56.188266 | orchestrator | .d..t...... src/
2026-03-29 01:36:56.188304 | orchestrator | .d..t...... src/github.com/
2026-03-29 01:36:56.188329 | orchestrator | .d..t...... src/github.com/osism/
2026-03-29 01:36:56.188351 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-29 01:36:56.188372 | orchestrator | RedHat.yml
2026-03-29 01:36:56.202684 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-29 01:36:56.202701 | orchestrator | RedHat.yml
2026-03-29 01:36:56.202753 | orchestrator | = 2.2.0"...
2026-03-29 01:37:07.804351 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-29 01:37:07.820160 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-29 01:37:08.265122 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-29 01:37:09.183007 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-29 01:37:09.569834 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-29 01:37:10.246074 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-29 01:37:10.622443 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-29 01:37:11.666488 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-29 01:37:11.666562 | orchestrator |
2026-03-29 01:37:11.666582 | orchestrator | Providers are signed by their developers.
2026-03-29 01:37:11.666598 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-29 01:37:11.666614 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-29 01:37:11.666632 | orchestrator |
2026-03-29 01:37:11.666646 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-29 01:37:11.666671 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-29 01:37:11.666680 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-29 01:37:11.666688 | orchestrator | you run "tofu init" in the future.
2026-03-29 01:37:11.666704 | orchestrator |
2026-03-29 01:37:11.666716 | orchestrator | OpenTofu has been successfully initialized!
2026-03-29 01:37:11.666734 | orchestrator |
2026-03-29 01:37:11.666748 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-29 01:37:11.666756 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-29 01:37:11.666764 | orchestrator | should now work.
2026-03-29 01:37:11.666773 | orchestrator |
2026-03-29 01:37:11.666781 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-29 01:37:11.666789 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-29 01:37:11.666796 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-29 01:37:11.841871 | orchestrator | Created and switched to workspace "ci"!
2026-03-29 01:37:11.841956 | orchestrator |
2026-03-29 01:37:11.841973 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-29 01:37:11.841986 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-29 01:37:11.841999 | orchestrator | for this configuration.
2026-03-29 01:37:11.973776 | orchestrator | ci.auto.tfvars
2026-03-29 01:37:11.978938 | orchestrator | default_custom.tf
2026-03-29 01:37:12.898988 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-29 01:37:13.468561 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-29 01:37:13.657909 | orchestrator |
2026-03-29 01:37:13.657954 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-29 01:37:13.657961 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-29 01:37:13.657965 | orchestrator | + create
2026-03-29 01:37:13.657970 | orchestrator | <= read (data resources)
2026-03-29 01:37:13.657975 | orchestrator |
2026-03-29 01:37:13.657980 | orchestrator | OpenTofu will perform the following actions:
2026-03-29 01:37:13.657990 | orchestrator |
2026-03-29 01:37:13.657994 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-29 01:37:13.657999 | orchestrator | # (config refers to values not yet known)
2026-03-29 01:37:13.658003 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-29 01:37:13.658007 | orchestrator | + checksum = (known after apply)
2026-03-29 01:37:13.658012 | orchestrator | + created_at = (known after apply)
2026-03-29 01:37:13.658029 | orchestrator | + file = (known after apply)
2026-03-29 01:37:13.658034 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658051 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658056 | orchestrator | + min_disk_gb = (known after apply)
2026-03-29 01:37:13.658060 | orchestrator | + min_ram_mb = (known after apply)
2026-03-29 01:37:13.658064 | orchestrator | + most_recent = true
2026-03-29 01:37:13.658068 | orchestrator | + name = (known after apply)
2026-03-29 01:37:13.658072 | orchestrator | + protected = (known after apply)
2026-03-29 01:37:13.658077 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658083 | orchestrator | + schema = (known after apply)
2026-03-29 01:37:13.658087 | orchestrator | + size_bytes = (known after apply)
2026-03-29 01:37:13.658091 | orchestrator | + tags = (known after apply)
2026-03-29 01:37:13.658095 | orchestrator | + updated_at = (known after apply)
2026-03-29 01:37:13.658099 | orchestrator | }
2026-03-29 01:37:13.658106 | orchestrator |
2026-03-29 01:37:13.658110 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-29 01:37:13.658114 | orchestrator | # (config refers to values not yet known)
2026-03-29 01:37:13.658118 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-29 01:37:13.658122 | orchestrator | + checksum = (known after apply)
2026-03-29 01:37:13.658126 | orchestrator | + created_at = (known after apply)
2026-03-29 01:37:13.658131 | orchestrator | + file = (known after apply)
2026-03-29 01:37:13.658135 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658139 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658143 | orchestrator | + min_disk_gb = (known after apply)
2026-03-29 01:37:13.658147 | orchestrator | + min_ram_mb = (known after apply)
2026-03-29 01:37:13.658151 | orchestrator | + most_recent = true
2026-03-29 01:37:13.658155 | orchestrator | + name = (known after apply)
2026-03-29 01:37:13.658159 | orchestrator | + protected = (known after apply)
2026-03-29 01:37:13.658163 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658167 | orchestrator | + schema = (known after apply)
2026-03-29 01:37:13.658171 | orchestrator | + size_bytes = (known after apply)
2026-03-29 01:37:13.658175 | orchestrator | + tags = (known after apply)
2026-03-29 01:37:13.658179 | orchestrator | + updated_at = (known after apply)
2026-03-29 01:37:13.658184 | orchestrator | }
2026-03-29 01:37:13.658188 | orchestrator |
2026-03-29 01:37:13.658192 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-29 01:37:13.658196 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-29 01:37:13.658200 | orchestrator | + content = (known after apply)
2026-03-29 01:37:13.658205 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-29 01:37:13.658209 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-29 01:37:13.658213 | orchestrator | + content_md5 = (known after apply)
2026-03-29 01:37:13.658217 | orchestrator | + content_sha1 = (known after apply)
2026-03-29 01:37:13.658221 | orchestrator | + content_sha256 = (known after apply)
2026-03-29 01:37:13.658225 | orchestrator | + content_sha512 = (known after apply)
2026-03-29 01:37:13.658229 | orchestrator | + directory_permission = "0777"
2026-03-29 01:37:13.658233 | orchestrator | + file_permission = "0644"
2026-03-29 01:37:13.658237 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-29 01:37:13.658241 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658245 | orchestrator | }
2026-03-29 01:37:13.658251 | orchestrator |
2026-03-29 01:37:13.658255 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-29 01:37:13.658259 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-29 01:37:13.658263 | orchestrator | + content = (known after apply)
2026-03-29 01:37:13.658267 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-29 01:37:13.658271 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-29 01:37:13.658275 | orchestrator | + content_md5 = (known after apply)
2026-03-29 01:37:13.658279 | orchestrator | + content_sha1 = (known after apply)
2026-03-29 01:37:13.658284 | orchestrator | + content_sha256 = (known after apply)
2026-03-29 01:37:13.658292 | orchestrator | + content_sha512 = (known after apply)
2026-03-29 01:37:13.658297 | orchestrator | + directory_permission = "0777"
2026-03-29 01:37:13.658301 | orchestrator | + file_permission = "0644"
2026-03-29 01:37:13.658308 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-29 01:37:13.658312 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658316 | orchestrator | }
2026-03-29 01:37:13.658320 | orchestrator |
2026-03-29 01:37:13.658325 | orchestrator | # local_file.inventory will be created
2026-03-29 01:37:13.658329 | orchestrator | + resource "local_file" "inventory" {
2026-03-29 01:37:13.658333 | orchestrator | + content = (known after apply)
2026-03-29 01:37:13.658337 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-29 01:37:13.658341 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-29 01:37:13.658345 | orchestrator | + content_md5 = (known after apply)
2026-03-29 01:37:13.658349 | orchestrator | + content_sha1 = (known after apply)
2026-03-29 01:37:13.658363 | orchestrator | + content_sha256 = (known after apply)
2026-03-29 01:37:13.658367 | orchestrator | + content_sha512 = (known after apply)
2026-03-29 01:37:13.658371 | orchestrator | + directory_permission = "0777"
2026-03-29 01:37:13.658375 | orchestrator | + file_permission = "0644"
2026-03-29 01:37:13.658387 | orchestrator | + filename = "inventory.ci"
2026-03-29 01:37:13.658392 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658396 | orchestrator | }
2026-03-29 01:37:13.658400 | orchestrator |
2026-03-29 01:37:13.658404 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-29 01:37:13.658408 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-29 01:37:13.658412 | orchestrator | + content = (sensitive value)
2026-03-29 01:37:13.658417 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-29 01:37:13.658421 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-29 01:37:13.658425 | orchestrator | + content_md5 = (known after apply)
2026-03-29 01:37:13.658429 | orchestrator | + content_sha1 = (known after apply)
2026-03-29 01:37:13.658433 | orchestrator | + content_sha256 = (known after apply)
2026-03-29 01:37:13.658437 | orchestrator | + content_sha512 = (known after apply)
2026-03-29 01:37:13.658441 | orchestrator | + directory_permission = "0700"
2026-03-29 01:37:13.658445 | orchestrator | + file_permission = "0600"
2026-03-29 01:37:13.658449 | orchestrator | + filename = ".id_rsa.ci"
2026-03-29 01:37:13.658453 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658457 | orchestrator | }
2026-03-29 01:37:13.658462 | orchestrator |
2026-03-29 01:37:13.658466 | orchestrator | # null_resource.node_semaphore will be created
2026-03-29 01:37:13.658470 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-29 01:37:13.658474 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658478 | orchestrator | }
2026-03-29 01:37:13.658484 | orchestrator |
2026-03-29 01:37:13.658488 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-29 01:37:13.658492 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-29 01:37:13.658496 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658500 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658505 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658509 | orchestrator | + image_id = (known after apply)
2026-03-29 01:37:13.658513 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658517 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-29 01:37:13.658521 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658525 | orchestrator | + size = 80
2026-03-29 01:37:13.658529 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658533 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658537 | orchestrator | }
2026-03-29 01:37:13.658541 | orchestrator |
2026-03-29 01:37:13.658545 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-29 01:37:13.658549 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 01:37:13.658554 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658558 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658562 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658569 | orchestrator | + image_id = (known after apply)
2026-03-29 01:37:13.658573 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658577 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-29 01:37:13.658581 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658585 | orchestrator | + size = 80
2026-03-29 01:37:13.658589 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658594 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658598 | orchestrator | }
2026-03-29 01:37:13.658602 | orchestrator |
2026-03-29 01:37:13.658606 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-29 01:37:13.658610 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 01:37:13.658614 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658618 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658622 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658626 | orchestrator | + image_id = (known after apply)
2026-03-29 01:37:13.658631 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658635 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-29 01:37:13.658639 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658643 | orchestrator | + size = 80
2026-03-29 01:37:13.658647 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658651 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658655 | orchestrator | }
2026-03-29 01:37:13.658659 | orchestrator |
2026-03-29 01:37:13.658663 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-29 01:37:13.658667 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 01:37:13.658671 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658676 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658680 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658684 | orchestrator | + image_id = (known after apply)
2026-03-29 01:37:13.658688 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658692 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-29 01:37:13.658696 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658700 | orchestrator | + size = 80
2026-03-29 01:37:13.658706 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658710 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658715 | orchestrator | }
2026-03-29 01:37:13.658719 | orchestrator |
2026-03-29 01:37:13.658723 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-29 01:37:13.658727 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 01:37:13.658731 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658735 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658739 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658743 | orchestrator | + image_id = (known after apply)
2026-03-29 01:37:13.658747 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658751 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-29 01:37:13.658755 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658760 | orchestrator | + size = 80
2026-03-29 01:37:13.658764 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658768 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658772 | orchestrator | }
2026-03-29 01:37:13.658777 | orchestrator |
2026-03-29 01:37:13.658782 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-29 01:37:13.658786 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 01:37:13.658790 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658794 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658798 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658805 | orchestrator | + image_id = (known after apply)
2026-03-29 01:37:13.658809 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658813 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-29 01:37:13.658817 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658821 | orchestrator | + size = 80
2026-03-29 01:37:13.658825 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658829 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658834 | orchestrator | }
2026-03-29 01:37:13.658838 | orchestrator |
2026-03-29 01:37:13.658842 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-29 01:37:13.658846 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 01:37:13.658850 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658854 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658858 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658862 | orchestrator | + image_id = (known after apply)
2026-03-29 01:37:13.658866 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658870 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-29 01:37:13.658874 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658878 | orchestrator | + size = 80
2026-03-29 01:37:13.658883 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658887 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658891 | orchestrator | }
2026-03-29 01:37:13.658895 | orchestrator |
2026-03-29 01:37:13.658899 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-29 01:37:13.658903 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 01:37:13.658907 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658911 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658915 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658919 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658924 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-29 01:37:13.658928 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658932 | orchestrator | + size = 20
2026-03-29 01:37:13.658936 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658940 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658944 | orchestrator | }
2026-03-29 01:37:13.658948 | orchestrator |
2026-03-29 01:37:13.658952 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-29 01:37:13.658957 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 01:37:13.658961 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.658965 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.658969 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.658973 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.658977 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-29 01:37:13.658981 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.658985 | orchestrator | + size = 20
2026-03-29 01:37:13.658989 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.658993 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.658997 | orchestrator | }
2026-03-29 01:37:13.659001 | orchestrator |
2026-03-29 01:37:13.659005 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-29 01:37:13.659009 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 01:37:13.659013 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.659017 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.659022 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.659026 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.659030 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-29 01:37:13.659034 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.659041 | orchestrator | + size = 20
2026-03-29 01:37:13.659045 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.659049 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.659053 | orchestrator | }
2026-03-29 01:37:13.659057 | orchestrator |
2026-03-29 01:37:13.659061 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-29 01:37:13.659065 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 01:37:13.659069 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.659073 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.659077 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.659083 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.659088 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-29 01:37:13.659092 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.659096 | orchestrator | + size = 20
2026-03-29 01:37:13.659100 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.659104 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.659108 | orchestrator | }
2026-03-29 01:37:13.659112 | orchestrator |
2026-03-29 01:37:13.659116 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-29 01:37:13.659120 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 01:37:13.659125 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.659129 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.659133 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.659137 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.659141 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-29 01:37:13.659145 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.659149 | orchestrator | + size = 20
2026-03-29 01:37:13.659153 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.659157 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.659161 | orchestrator | }
2026-03-29 01:37:13.659167 | orchestrator |
2026-03-29 01:37:13.659171 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-29 01:37:13.659175 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 01:37:13.659180 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.659184 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.659188 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.659192 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.659196 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-29 01:37:13.659200 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.659204 | orchestrator | + size = 20
2026-03-29 01:37:13.659208 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.659212 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.659216 | orchestrator | }
2026-03-29 01:37:13.659220 | orchestrator |
2026-03-29 01:37:13.659224 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-29 01:37:13.659228 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 01:37:13.659233 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.659237 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.659241 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.659245 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.659249 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-29 01:37:13.659253 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.659257 | orchestrator | + size = 20
2026-03-29 01:37:13.659261 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.659265 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.659269 | orchestrator | }
2026-03-29 01:37:13.659273 | orchestrator |
2026-03-29 01:37:13.659278 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-29 01:37:13.659282 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 01:37:13.659289 | orchestrator | + attachment = (known after apply)
2026-03-29 01:37:13.659293 | orchestrator | + availability_zone = "nova"
2026-03-29 01:37:13.659297 | orchestrator | + id = (known after apply)
2026-03-29 01:37:13.659301 | orchestrator | + metadata = (known after apply)
2026-03-29 01:37:13.659305 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-29 01:37:13.659309 | orchestrator | + region = (known after apply)
2026-03-29 01:37:13.659313 | orchestrator | + size = 20
2026-03-29 01:37:13.659317 | orchestrator | + volume_retype_policy = "never"
2026-03-29 01:37:13.659321 | orchestrator | + volume_type = "ssd"
2026-03-29 01:37:13.659325 | orchestrator | }
2026-03-29 01:37:13.659330 | orchestrator |
2026-03-29 01:37:13.659334 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-29 01:37:13.659338 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-29 01:37:13.659342 | orchestrator | + attachment = (known after apply) 2026-03-29 01:37:13.659346 | orchestrator | + availability_zone = "nova" 2026-03-29 01:37:13.659350 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.659354 | orchestrator | + metadata = (known after apply) 2026-03-29 01:37:13.659358 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-29 01:37:13.659362 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.659366 | orchestrator | + size = 20 2026-03-29 01:37:13.659370 | orchestrator | + volume_retype_policy = "never" 2026-03-29 01:37:13.659374 | orchestrator | + volume_type = "ssd" 2026-03-29 01:37:13.659399 | orchestrator | } 2026-03-29 01:37:13.659404 | orchestrator | 2026-03-29 01:37:13.659408 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-29 01:37:13.659412 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-29 01:37:13.659416 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 01:37:13.659420 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 01:37:13.659425 | orchestrator | + all_metadata = (known after apply) 2026-03-29 01:37:13.659429 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.659433 | orchestrator | + availability_zone = "nova" 2026-03-29 01:37:13.659436 | orchestrator | + config_drive = true 2026-03-29 01:37:13.659442 | orchestrator | + created = (known after apply) 2026-03-29 01:37:13.659446 | orchestrator | + flavor_id = (known after apply) 2026-03-29 01:37:13.659450 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-29 01:37:13.659454 | orchestrator | + force_delete = false 2026-03-29 01:37:13.659457 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 01:37:13.659461 | 
orchestrator | + id = (known after apply) 2026-03-29 01:37:13.659465 | orchestrator | + image_id = (known after apply) 2026-03-29 01:37:13.659469 | orchestrator | + image_name = (known after apply) 2026-03-29 01:37:13.659472 | orchestrator | + key_pair = "testbed" 2026-03-29 01:37:13.659476 | orchestrator | + name = "testbed-manager" 2026-03-29 01:37:13.659480 | orchestrator | + power_state = "active" 2026-03-29 01:37:13.659484 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.659487 | orchestrator | + security_groups = (known after apply) 2026-03-29 01:37:13.659491 | orchestrator | + stop_before_destroy = false 2026-03-29 01:37:13.659495 | orchestrator | + updated = (known after apply) 2026-03-29 01:37:13.659499 | orchestrator | + user_data = (sensitive value) 2026-03-29 01:37:13.659502 | orchestrator | 2026-03-29 01:37:13.659506 | orchestrator | + block_device { 2026-03-29 01:37:13.659510 | orchestrator | + boot_index = 0 2026-03-29 01:37:13.659514 | orchestrator | + delete_on_termination = false 2026-03-29 01:37:13.659518 | orchestrator | + destination_type = "volume" 2026-03-29 01:37:13.659521 | orchestrator | + multiattach = false 2026-03-29 01:37:13.659525 | orchestrator | + source_type = "volume" 2026-03-29 01:37:13.659529 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.659535 | orchestrator | } 2026-03-29 01:37:13.659539 | orchestrator | 2026-03-29 01:37:13.659543 | orchestrator | + network { 2026-03-29 01:37:13.659547 | orchestrator | + access_network = false 2026-03-29 01:37:13.659550 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 01:37:13.659554 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 01:37:13.659558 | orchestrator | + mac = (known after apply) 2026-03-29 01:37:13.659562 | orchestrator | + name = (known after apply) 2026-03-29 01:37:13.659565 | orchestrator | + port = (known after apply) 2026-03-29 01:37:13.659569 | orchestrator | + uuid = (known after apply) 2026-03-29 
01:37:13.659573 | orchestrator | } 2026-03-29 01:37:13.659577 | orchestrator | } 2026-03-29 01:37:13.659582 | orchestrator | 2026-03-29 01:37:13.659586 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-29 01:37:13.659590 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 01:37:13.659594 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 01:37:13.659598 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 01:37:13.659601 | orchestrator | + all_metadata = (known after apply) 2026-03-29 01:37:13.659605 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.659609 | orchestrator | + availability_zone = "nova" 2026-03-29 01:37:13.659613 | orchestrator | + config_drive = true 2026-03-29 01:37:13.659616 | orchestrator | + created = (known after apply) 2026-03-29 01:37:13.659620 | orchestrator | + flavor_id = (known after apply) 2026-03-29 01:37:13.659624 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 01:37:13.659627 | orchestrator | + force_delete = false 2026-03-29 01:37:13.659631 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 01:37:13.659635 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.659639 | orchestrator | + image_id = (known after apply) 2026-03-29 01:37:13.659642 | orchestrator | + image_name = (known after apply) 2026-03-29 01:37:13.659646 | orchestrator | + key_pair = "testbed" 2026-03-29 01:37:13.659650 | orchestrator | + name = "testbed-node-0" 2026-03-29 01:37:13.659654 | orchestrator | + power_state = "active" 2026-03-29 01:37:13.659657 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.659661 | orchestrator | + security_groups = (known after apply) 2026-03-29 01:37:13.659665 | orchestrator | + stop_before_destroy = false 2026-03-29 01:37:13.659669 | orchestrator | + updated = (known after apply) 2026-03-29 01:37:13.659672 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 01:37:13.659676 | orchestrator | 2026-03-29 01:37:13.659680 | orchestrator | + block_device { 2026-03-29 01:37:13.659684 | orchestrator | + boot_index = 0 2026-03-29 01:37:13.659687 | orchestrator | + delete_on_termination = false 2026-03-29 01:37:13.659691 | orchestrator | + destination_type = "volume" 2026-03-29 01:37:13.659695 | orchestrator | + multiattach = false 2026-03-29 01:37:13.659699 | orchestrator | + source_type = "volume" 2026-03-29 01:37:13.659702 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.659706 | orchestrator | } 2026-03-29 01:37:13.659710 | orchestrator | 2026-03-29 01:37:13.659714 | orchestrator | + network { 2026-03-29 01:37:13.659718 | orchestrator | + access_network = false 2026-03-29 01:37:13.659721 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 01:37:13.659725 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 01:37:13.659729 | orchestrator | + mac = (known after apply) 2026-03-29 01:37:13.659732 | orchestrator | + name = (known after apply) 2026-03-29 01:37:13.659736 | orchestrator | + port = (known after apply) 2026-03-29 01:37:13.659740 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.659744 | orchestrator | } 2026-03-29 01:37:13.659747 | orchestrator | } 2026-03-29 01:37:13.659751 | orchestrator | 2026-03-29 01:37:13.659755 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-29 01:37:13.659759 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 01:37:13.659763 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 01:37:13.659769 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 01:37:13.659772 | orchestrator | + all_metadata = (known after apply) 2026-03-29 01:37:13.659776 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.659780 | orchestrator | + availability_zone = "nova" 2026-03-29 01:37:13.659784 
| orchestrator | + config_drive = true 2026-03-29 01:37:13.659787 | orchestrator | + created = (known after apply) 2026-03-29 01:37:13.659791 | orchestrator | + flavor_id = (known after apply) 2026-03-29 01:37:13.659795 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 01:37:13.659798 | orchestrator | + force_delete = false 2026-03-29 01:37:13.659802 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 01:37:13.659806 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.659810 | orchestrator | + image_id = (known after apply) 2026-03-29 01:37:13.659813 | orchestrator | + image_name = (known after apply) 2026-03-29 01:37:13.659817 | orchestrator | + key_pair = "testbed" 2026-03-29 01:37:13.659821 | orchestrator | + name = "testbed-node-1" 2026-03-29 01:37:13.659825 | orchestrator | + power_state = "active" 2026-03-29 01:37:13.659828 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.659832 | orchestrator | + security_groups = (known after apply) 2026-03-29 01:37:13.659836 | orchestrator | + stop_before_destroy = false 2026-03-29 01:37:13.659840 | orchestrator | + updated = (known after apply) 2026-03-29 01:37:13.659845 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 01:37:13.659849 | orchestrator | 2026-03-29 01:37:13.659853 | orchestrator | + block_device { 2026-03-29 01:37:13.659857 | orchestrator | + boot_index = 0 2026-03-29 01:37:13.659861 | orchestrator | + delete_on_termination = false 2026-03-29 01:37:13.659865 | orchestrator | + destination_type = "volume" 2026-03-29 01:37:13.659868 | orchestrator | + multiattach = false 2026-03-29 01:37:13.659872 | orchestrator | + source_type = "volume" 2026-03-29 01:37:13.659876 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.659879 | orchestrator | } 2026-03-29 01:37:13.659883 | orchestrator | 2026-03-29 01:37:13.659887 | orchestrator | + network { 2026-03-29 01:37:13.659891 | orchestrator | + access_network = 
false 2026-03-29 01:37:13.659895 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 01:37:13.659898 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 01:37:13.659902 | orchestrator | + mac = (known after apply) 2026-03-29 01:37:13.659906 | orchestrator | + name = (known after apply) 2026-03-29 01:37:13.659910 | orchestrator | + port = (known after apply) 2026-03-29 01:37:13.659913 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.659917 | orchestrator | } 2026-03-29 01:37:13.659921 | orchestrator | } 2026-03-29 01:37:13.659925 | orchestrator | 2026-03-29 01:37:13.659928 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-29 01:37:13.659932 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 01:37:13.659936 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 01:37:13.659940 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 01:37:13.659943 | orchestrator | + all_metadata = (known after apply) 2026-03-29 01:37:13.659947 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.659951 | orchestrator | + availability_zone = "nova" 2026-03-29 01:37:13.659955 | orchestrator | + config_drive = true 2026-03-29 01:37:13.659961 | orchestrator | + created = (known after apply) 2026-03-29 01:37:13.659965 | orchestrator | + flavor_id = (known after apply) 2026-03-29 01:37:13.659969 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 01:37:13.659972 | orchestrator | + force_delete = false 2026-03-29 01:37:13.659976 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 01:37:13.659980 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.659984 | orchestrator | + image_id = (known after apply) 2026-03-29 01:37:13.659990 | orchestrator | + image_name = (known after apply) 2026-03-29 01:37:13.659993 | orchestrator | + key_pair = "testbed" 2026-03-29 01:37:13.659997 | orchestrator | + name = 
"testbed-node-2" 2026-03-29 01:37:13.660001 | orchestrator | + power_state = "active" 2026-03-29 01:37:13.660005 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.660008 | orchestrator | + security_groups = (known after apply) 2026-03-29 01:37:13.660012 | orchestrator | + stop_before_destroy = false 2026-03-29 01:37:13.660016 | orchestrator | + updated = (known after apply) 2026-03-29 01:37:13.660020 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 01:37:13.660023 | orchestrator | 2026-03-29 01:37:13.660027 | orchestrator | + block_device { 2026-03-29 01:37:13.660031 | orchestrator | + boot_index = 0 2026-03-29 01:37:13.660035 | orchestrator | + delete_on_termination = false 2026-03-29 01:37:13.660038 | orchestrator | + destination_type = "volume" 2026-03-29 01:37:13.660042 | orchestrator | + multiattach = false 2026-03-29 01:37:13.660046 | orchestrator | + source_type = "volume" 2026-03-29 01:37:13.660049 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.660053 | orchestrator | } 2026-03-29 01:37:13.660057 | orchestrator | 2026-03-29 01:37:13.660061 | orchestrator | + network { 2026-03-29 01:37:13.660064 | orchestrator | + access_network = false 2026-03-29 01:37:13.660068 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 01:37:13.660072 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 01:37:13.660076 | orchestrator | + mac = (known after apply) 2026-03-29 01:37:13.660079 | orchestrator | + name = (known after apply) 2026-03-29 01:37:13.660083 | orchestrator | + port = (known after apply) 2026-03-29 01:37:13.660087 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.660091 | orchestrator | } 2026-03-29 01:37:13.660094 | orchestrator | } 2026-03-29 01:37:13.660098 | orchestrator | 2026-03-29 01:37:13.660106 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-29 01:37:13.660110 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-29 01:37:13.660114 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 01:37:13.660117 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 01:37:13.660121 | orchestrator | + all_metadata = (known after apply) 2026-03-29 01:37:13.660125 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.660129 | orchestrator | + availability_zone = "nova" 2026-03-29 01:37:13.660132 | orchestrator | + config_drive = true 2026-03-29 01:37:13.660136 | orchestrator | + created = (known after apply) 2026-03-29 01:37:13.660140 | orchestrator | + flavor_id = (known after apply) 2026-03-29 01:37:13.660144 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 01:37:13.660147 | orchestrator | + force_delete = false 2026-03-29 01:37:13.660151 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 01:37:13.660155 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.660158 | orchestrator | + image_id = (known after apply) 2026-03-29 01:37:13.660162 | orchestrator | + image_name = (known after apply) 2026-03-29 01:37:13.660166 | orchestrator | + key_pair = "testbed" 2026-03-29 01:37:13.660170 | orchestrator | + name = "testbed-node-3" 2026-03-29 01:37:13.660173 | orchestrator | + power_state = "active" 2026-03-29 01:37:13.660177 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.660181 | orchestrator | + security_groups = (known after apply) 2026-03-29 01:37:13.660184 | orchestrator | + stop_before_destroy = false 2026-03-29 01:37:13.660188 | orchestrator | + updated = (known after apply) 2026-03-29 01:37:13.660192 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 01:37:13.660196 | orchestrator | 2026-03-29 01:37:13.660200 | orchestrator | + block_device { 2026-03-29 01:37:13.660203 | orchestrator | + boot_index = 0 2026-03-29 01:37:13.660207 | orchestrator | + delete_on_termination = false 2026-03-29 
01:37:13.660211 | orchestrator | + destination_type = "volume" 2026-03-29 01:37:13.660245 | orchestrator | + multiattach = false 2026-03-29 01:37:13.660249 | orchestrator | + source_type = "volume" 2026-03-29 01:37:13.660253 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.660257 | orchestrator | } 2026-03-29 01:37:13.660260 | orchestrator | 2026-03-29 01:37:13.660264 | orchestrator | + network { 2026-03-29 01:37:13.660268 | orchestrator | + access_network = false 2026-03-29 01:37:13.660272 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 01:37:13.660275 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 01:37:13.660279 | orchestrator | + mac = (known after apply) 2026-03-29 01:37:13.660283 | orchestrator | + name = (known after apply) 2026-03-29 01:37:13.660286 | orchestrator | + port = (known after apply) 2026-03-29 01:37:13.660290 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.660294 | orchestrator | } 2026-03-29 01:37:13.660298 | orchestrator | } 2026-03-29 01:37:13.660301 | orchestrator | 2026-03-29 01:37:13.660305 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-29 01:37:13.660309 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 01:37:13.660313 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 01:37:13.660316 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 01:37:13.660320 | orchestrator | + all_metadata = (known after apply) 2026-03-29 01:37:13.660324 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.660328 | orchestrator | + availability_zone = "nova" 2026-03-29 01:37:13.660331 | orchestrator | + config_drive = true 2026-03-29 01:37:13.660335 | orchestrator | + created = (known after apply) 2026-03-29 01:37:13.660339 | orchestrator | + flavor_id = (known after apply) 2026-03-29 01:37:13.660342 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 01:37:13.660346 | 
orchestrator | + force_delete = false 2026-03-29 01:37:13.660350 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 01:37:13.660354 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.660357 | orchestrator | + image_id = (known after apply) 2026-03-29 01:37:13.660361 | orchestrator | + image_name = (known after apply) 2026-03-29 01:37:13.660365 | orchestrator | + key_pair = "testbed" 2026-03-29 01:37:13.660368 | orchestrator | + name = "testbed-node-4" 2026-03-29 01:37:13.660374 | orchestrator | + power_state = "active" 2026-03-29 01:37:13.660385 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.660389 | orchestrator | + security_groups = (known after apply) 2026-03-29 01:37:13.660392 | orchestrator | + stop_before_destroy = false 2026-03-29 01:37:13.660396 | orchestrator | + updated = (known after apply) 2026-03-29 01:37:13.660400 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 01:37:13.660404 | orchestrator | 2026-03-29 01:37:13.660408 | orchestrator | + block_device { 2026-03-29 01:37:13.660411 | orchestrator | + boot_index = 0 2026-03-29 01:37:13.660415 | orchestrator | + delete_on_termination = false 2026-03-29 01:37:13.660419 | orchestrator | + destination_type = "volume" 2026-03-29 01:37:13.660423 | orchestrator | + multiattach = false 2026-03-29 01:37:13.660426 | orchestrator | + source_type = "volume" 2026-03-29 01:37:13.660430 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.660434 | orchestrator | } 2026-03-29 01:37:13.660438 | orchestrator | 2026-03-29 01:37:13.660442 | orchestrator | + network { 2026-03-29 01:37:13.660445 | orchestrator | + access_network = false 2026-03-29 01:37:13.660449 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 01:37:13.660453 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 01:37:13.660456 | orchestrator | + mac = (known after apply) 2026-03-29 01:37:13.660460 | orchestrator | + name = (known 
after apply) 2026-03-29 01:37:13.660464 | orchestrator | + port = (known after apply) 2026-03-29 01:37:13.660468 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.660472 | orchestrator | } 2026-03-29 01:37:13.660475 | orchestrator | } 2026-03-29 01:37:13.660482 | orchestrator | 2026-03-29 01:37:13.660486 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-29 01:37:13.660490 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 01:37:13.660494 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 01:37:13.660497 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 01:37:13.660501 | orchestrator | + all_metadata = (known after apply) 2026-03-29 01:37:13.660505 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.660509 | orchestrator | + availability_zone = "nova" 2026-03-29 01:37:13.660512 | orchestrator | + config_drive = true 2026-03-29 01:37:13.660516 | orchestrator | + created = (known after apply) 2026-03-29 01:37:13.660520 | orchestrator | + flavor_id = (known after apply) 2026-03-29 01:37:13.660524 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 01:37:13.660527 | orchestrator | + force_delete = false 2026-03-29 01:37:13.660531 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 01:37:13.660535 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.660539 | orchestrator | + image_id = (known after apply) 2026-03-29 01:37:13.660543 | orchestrator | + image_name = (known after apply) 2026-03-29 01:37:13.660546 | orchestrator | + key_pair = "testbed" 2026-03-29 01:37:13.660550 | orchestrator | + name = "testbed-node-5" 2026-03-29 01:37:13.660554 | orchestrator | + power_state = "active" 2026-03-29 01:37:13.660557 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.660561 | orchestrator | + security_groups = (known after apply) 2026-03-29 01:37:13.660565 | orchestrator | + 
stop_before_destroy = false 2026-03-29 01:37:13.660569 | orchestrator | + updated = (known after apply) 2026-03-29 01:37:13.660572 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 01:37:13.660576 | orchestrator | 2026-03-29 01:37:13.660580 | orchestrator | + block_device { 2026-03-29 01:37:13.660584 | orchestrator | + boot_index = 0 2026-03-29 01:37:13.660588 | orchestrator | + delete_on_termination = false 2026-03-29 01:37:13.660591 | orchestrator | + destination_type = "volume" 2026-03-29 01:37:13.660595 | orchestrator | + multiattach = false 2026-03-29 01:37:13.660599 | orchestrator | + source_type = "volume" 2026-03-29 01:37:13.660602 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.660606 | orchestrator | } 2026-03-29 01:37:13.660610 | orchestrator | 2026-03-29 01:37:13.660614 | orchestrator | + network { 2026-03-29 01:37:13.660617 | orchestrator | + access_network = false 2026-03-29 01:37:13.660621 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 01:37:13.660625 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 01:37:13.660629 | orchestrator | + mac = (known after apply) 2026-03-29 01:37:13.660633 | orchestrator | + name = (known after apply) 2026-03-29 01:37:13.660636 | orchestrator | + port = (known after apply) 2026-03-29 01:37:13.660640 | orchestrator | + uuid = (known after apply) 2026-03-29 01:37:13.660644 | orchestrator | } 2026-03-29 01:37:13.660648 | orchestrator | } 2026-03-29 01:37:13.660651 | orchestrator | 2026-03-29 01:37:13.660655 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-29 01:37:13.660659 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-29 01:37:13.660663 | orchestrator | + fingerprint = (known after apply) 2026-03-29 01:37:13.660667 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.660670 | orchestrator | + name = "testbed" 2026-03-29 01:37:13.660674 | orchestrator | + private_key = 
(sensitive value) 2026-03-29 01:37:13.660678 | orchestrator | + public_key = (known after apply) 2026-03-29 01:37:13.660682 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.660685 | orchestrator | + user_id = (known after apply) 2026-03-29 01:37:13.660689 | orchestrator | } 2026-03-29 01:37:13.660693 | orchestrator | 2026-03-29 01:37:13.660697 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-29 01:37:13.660700 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 01:37:13.660707 | orchestrator | + device = (known after apply) 2026-03-29 01:37:13.660710 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.660714 | orchestrator | + instance_id = (known after apply) 2026-03-29 01:37:13.660718 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.660724 | orchestrator | + volume_id = (known after apply) 2026-03-29 01:37:13.660728 | orchestrator | } 2026-03-29 01:37:13.660732 | orchestrator | 2026-03-29 01:37:13.660736 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-29 01:37:13.660740 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 01:37:13.660743 | orchestrator | + device = (known after apply) 2026-03-29 01:37:13.660747 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.660751 | orchestrator | + instance_id = (known after apply) 2026-03-29 01:37:13.660755 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.660758 | orchestrator | + volume_id = (known after apply) 2026-03-29 01:37:13.660762 | orchestrator | } 2026-03-29 01:37:13.660766 | orchestrator | 2026-03-29 01:37:13.660770 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-29 01:37:13.660774 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
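The node_port_management[0..5] entries in the plan above differ only in their fixed IP address (192.168.16.10 through 192.168.16.15) while repeating the same three allowed_address_pairs for the addresses that may float between nodes (192.168.16.254, .8, .9). A count-based port definition along the following lines would produce such a plan. This is a hedged sketch, not the actual osism/testbed source: the hard-coded count, the resource references, and the cidrhost arithmetic are assumptions for illustration.

```hcl
# Hypothetical sketch of a definition yielding the node_port_management[0..5]
# plan entries above; names and values are illustrative assumptions.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]

  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.subnet_management.id
    # host 10 + index in 192.168.16.0/20 -> 192.168.16.10 .. 192.168.16.15
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index)
  }

  # Virtual IPs (e.g. VRRP addresses) must be whitelisted on every port,
  # which is why each plan entry repeats the same allowed_address_pairs.
  dynamic "allowed_address_pairs" {
    for_each = ["192.168.16.254/32", "192.168.16.8/32", "192.168.16.9/32"]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}
```

Port security in Neutron drops traffic whose source address is not the port's own fixed IP, so without these allowed_address_pairs a failover VIP could not move between the nodes.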
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }
01:37:13.662933 | orchestrator | 2026-03-29 01:37:13.662937 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-03-29 01:37:13.662941 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-03-29 01:37:13.662945 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.662948 | orchestrator | + description = "node security group" 2026-03-29 01:37:13.662952 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.662956 | orchestrator | + name = "testbed-node" 2026-03-29 01:37:13.662960 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.662963 | orchestrator | + stateful = (known after apply) 2026-03-29 01:37:13.662967 | orchestrator | + tenant_id = (known after apply) 2026-03-29 01:37:13.662971 | orchestrator | } 2026-03-29 01:37:13.662974 | orchestrator | 2026-03-29 01:37:13.662978 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-03-29 01:37:13.662982 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-03-29 01:37:13.662986 | orchestrator | + all_tags = (known after apply) 2026-03-29 01:37:13.662989 | orchestrator | + cidr = "192.168.16.0/20" 2026-03-29 01:37:13.662993 | orchestrator | + dns_nameservers = [ 2026-03-29 01:37:13.662997 | orchestrator | + "8.8.8.8", 2026-03-29 01:37:13.663001 | orchestrator | + "9.9.9.9", 2026-03-29 01:37:13.663005 | orchestrator | ] 2026-03-29 01:37:13.663008 | orchestrator | + enable_dhcp = true 2026-03-29 01:37:13.663012 | orchestrator | + gateway_ip = (known after apply) 2026-03-29 01:37:13.663018 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.663022 | orchestrator | + ip_version = 4 2026-03-29 01:37:13.663025 | orchestrator | + ipv6_address_mode = (known after apply) 2026-03-29 01:37:13.663029 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-03-29 01:37:13.663033 | orchestrator | + name = "subnet-testbed-management" 
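An aside on the security-group rules planned above: the node rules use protocol names (`icmp`, `tcp`, `udp`) while the VRRP rule uses the raw IANA protocol number `"112"` (VRRP, as keepalived traffic between the testbed nodes). A small sanity sketch, not part of the job itself, cross-checking those numbers against Python's stdlib constants (VRRP has no `socket` constant, so 112 is asserted directly per the IANA registry):

```python
import socket

# Protocol identifiers as they appear in the secgroup rules in the plan above.
PLAN_PROTOCOLS = {"icmp": 1, "tcp": 6, "udp": 17, "vrrp": 112}

# icmp/tcp/udp can be checked against the stdlib's IPPROTO constants;
# VRRP (112) has no stdlib constant, so it is pinned per IANA.
assert PLAN_PROTOCOLS["icmp"] == socket.IPPROTO_ICMP
assert PLAN_PROTOCOLS["tcp"] == socket.IPPROTO_TCP
assert PLAN_PROTOCOLS["udp"] == socket.IPPROTO_UDP
assert PLAN_PROTOCOLS["vrrp"] == 112
```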
2026-03-29 01:37:13.663037 | orchestrator | + network_id = (known after apply) 2026-03-29 01:37:13.663040 | orchestrator | + no_gateway = false 2026-03-29 01:37:13.663044 | orchestrator | + region = (known after apply) 2026-03-29 01:37:13.663048 | orchestrator | + service_types = (known after apply) 2026-03-29 01:37:13.663054 | orchestrator | + tenant_id = (known after apply) 2026-03-29 01:37:13.663058 | orchestrator | 2026-03-29 01:37:13.663061 | orchestrator | + allocation_pool { 2026-03-29 01:37:13.663065 | orchestrator | + end = "192.168.31.250" 2026-03-29 01:37:13.663069 | orchestrator | + start = "192.168.31.200" 2026-03-29 01:37:13.663073 | orchestrator | } 2026-03-29 01:37:13.663076 | orchestrator | } 2026-03-29 01:37:13.663080 | orchestrator | 2026-03-29 01:37:13.663084 | orchestrator | # terraform_data.image will be created 2026-03-29 01:37:13.663087 | orchestrator | + resource "terraform_data" "image" { 2026-03-29 01:37:13.663091 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.663095 | orchestrator | + input = "Ubuntu 24.04" 2026-03-29 01:37:13.663099 | orchestrator | + output = (known after apply) 2026-03-29 01:37:13.663102 | orchestrator | } 2026-03-29 01:37:13.663106 | orchestrator | 2026-03-29 01:37:13.663110 | orchestrator | # terraform_data.image_node will be created 2026-03-29 01:37:13.663114 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-29 01:37:13.663159 | orchestrator | + id = (known after apply) 2026-03-29 01:37:13.663165 | orchestrator | + input = "Ubuntu 24.04" 2026-03-29 01:37:13.663169 | orchestrator | + output = (known after apply) 2026-03-29 01:37:13.663173 | orchestrator | } 2026-03-29 01:37:13.663177 | orchestrator | 2026-03-29 01:37:13.663181 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
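The `subnet-testbed-management` plan above declares CIDR `192.168.16.0/20` with an allocation pool of `192.168.31.200`–`192.168.31.250`. A minimal sketch (not part of the job) verifying that the pool sits inside the CIDR, using values taken verbatim from the plan:

```python
import ipaddress

# Values copied from the Terraform plan output above.
cidr = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# Both pool boundaries must fall inside the subnet CIDR,
# and the pool must not be inverted.
assert pool_start in cidr and pool_end in cidr
assert pool_start <= pool_end
print(cidr.num_addresses)  # prints 4096 (a /20 spans 192.168.16.0-192.168.31.255)
```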
2026-03-29 01:37:13.663184 | orchestrator | 2026-03-29 01:37:13.663188 | orchestrator | Changes to Outputs: 2026-03-29 01:37:13.663192 | orchestrator | + manager_address = (sensitive value) 2026-03-29 01:37:13.663196 | orchestrator | + private_key = (sensitive value) 2026-03-29 01:37:13.875356 | orchestrator | terraform_data.image_node: Creating... 2026-03-29 01:37:13.875409 | orchestrator | terraform_data.image: Creating... 2026-03-29 01:37:13.875416 | orchestrator | terraform_data.image: Creation complete after 0s [id=af493c53-0722-e6f8-653d-8e8fdddce437] 2026-03-29 01:37:13.875421 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=148368b2-708d-ae2c-d77b-25d46a2ff299] 2026-03-29 01:37:13.888226 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-29 01:37:13.895772 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-29 01:37:13.896567 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-29 01:37:13.898422 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-29 01:37:13.899908 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-29 01:37:13.901420 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-29 01:37:13.914584 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-29 01:37:13.915499 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-29 01:37:13.915694 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-29 01:37:13.915865 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-29 01:37:14.377523 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-29 01:37:14.383495 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
2026-03-29 01:37:14.419068 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-03-29 01:37:14.426156 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-29 01:37:14.995254 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=6315e652-2ce3-45f2-b9a7-f41e161dd12a] 2026-03-29 01:37:14.999827 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-29 01:37:15.054640 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-29 01:37:15.067082 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-29 01:37:17.531133 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=d786153b-aa88-42e2-b7c0-be41a0e4d472] 2026-03-29 01:37:17.546260 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-29 01:37:17.552829 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=be2200f0-5502-47ad-8b86-f79404ad3d6e] 2026-03-29 01:37:17.554501 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=d8e8d83960111858a582a5b5ed99b364d35df76f] 2026-03-29 01:37:17.561212 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-03-29 01:37:17.562891 | orchestrator | local_file.id_rsa_pub: Creating... 
2026-03-29 01:37:17.567576 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=ee98996d-a6b6-4070-b987-1a6503ed9735] 2026-03-29 01:37:17.573678 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=5e5fd293649a633f63ace999bee1ac70c10326eb] 2026-03-29 01:37:17.583668 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=ef57056d-cdc7-4754-ab80-1b6d0ee4138b] 2026-03-29 01:37:17.585813 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-03-29 01:37:17.590479 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-29 01:37:17.592175 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-29 01:37:17.602339 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=3d42ed5a-37f6-4df6-b807-f02e933f3249] 2026-03-29 01:37:17.608472 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-29 01:37:17.622149 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=002a7ab0-e850-4de5-8841-9c71e722e4fa] 2026-03-29 01:37:17.630312 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-29 01:37:17.660133 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=93baa594-14d8-4050-b691-1dff11f6053a] 2026-03-29 01:37:17.673201 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-03-29 01:37:17.722529 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=2180dd6a-0158-4028-8893-0009518a5de0] 2026-03-29 01:37:17.728787 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=10b9e860-1cc5-4615-8ff0-9bdd7bb94f62] 2026-03-29 01:37:18.517640 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c661f325-f619-40ff-ad64-50feccc9e71d] 2026-03-29 01:37:18.525021 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-03-29 01:37:18.561464 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=641edd66-c7f1-4829-b4ab-a5be1c0d9fdc] 2026-03-29 01:37:20.937918 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=b1dc5caa-ccc2-437e-ad49-4eaa58060366] 2026-03-29 01:37:20.948523 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-29 01:37:20.951012 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-29 01:37:20.951318 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-03-29 01:37:21.003340 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=9b0adc3c-7f5d-4894-a427-7dd9e74f1d22] 2026-03-29 01:37:21.012845 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=8615e525-25c8-40da-bcc5-a75883081ac3] 2026-03-29 01:37:21.045225 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=ccc377a4-68eb-41df-b094-e638a3387548] 2026-03-29 01:37:21.048122 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=160e36ea-4e1e-4f6f-a576-5c1ba660feb6] 2026-03-29 01:37:21.059027 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=ee30bf19-1ab6-4918-a8c8-c92c337d13e6] 2026-03-29 01:37:21.062995 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=36bedc35-435f-4980-812f-4ca1d4f6c7bb] 2026-03-29 01:37:21.145766 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=6ca033d6-5a64-4a78-8772-a0c81cd914bb] 2026-03-29 01:37:21.158266 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-29 01:37:21.158881 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-29 01:37:21.161232 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-29 01:37:21.161547 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-29 01:37:21.161910 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-29 01:37:21.163831 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-29 01:37:21.167703 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 
2026-03-29 01:37:21.169502 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-29 01:37:21.189675 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=eccfc65a-e137-42df-8a5d-a43a4d749f28] 2026-03-29 01:37:21.196430 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-29 01:37:21.468844 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=1200f729-be3f-425f-8a09-0e4fd6133f98] 2026-03-29 01:37:21.481278 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-29 01:37:21.738240 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=682aa065-382b-4348-802e-86487d43676a] 2026-03-29 01:37:21.745698 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-29 01:37:21.773430 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=e7b0ba07-53d9-483c-8cfc-3fc6550937da] 2026-03-29 01:37:21.779615 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-29 01:37:21.989222 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=87220cdf-80e3-4e8a-9fa4-e81bfcc16842] 2026-03-29 01:37:21.996826 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-29 01:37:22.015578 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=da175101-a1f2-4856-b695-24fe226ce305] 2026-03-29 01:37:22.023240 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
2026-03-29 01:37:22.078525 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=1dd0c98c-6e5e-440b-be1c-852997353c7d] 2026-03-29 01:37:22.084940 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-29 01:37:22.120744 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=bf131a14-f29c-4991-873e-0da66418e4f9] 2026-03-29 01:37:22.128555 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-03-29 01:37:22.130429 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=70ee15f9-eea2-4003-8422-2da72d168ead] 2026-03-29 01:37:22.134983 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=4112fece-6fbf-46f1-971a-0fc45bd22d7a] 2026-03-29 01:37:22.253656 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=45b235e0-b2c2-4b14-bacc-7df4170d817a] 2026-03-29 01:37:22.350285 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=fc697a77-007b-4860-9e1e-56c15d06b711] 2026-03-29 01:37:22.352472 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=64283d06-eb2c-4c4a-a272-54b74d3719aa] 2026-03-29 01:37:22.656120 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f3517c73-8467-4fbd-9997-a88743018d10] 2026-03-29 01:37:22.694464 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=98404865-9f16-4e21-9c63-cc1d8fc01055] 2026-03-29 01:37:22.862410 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=9edc7bd3-03fa-4471-af37-fee85c651323] 2026-03-29 
01:37:22.862489 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=87019d94-1cf5-4e12-9173-da7b5d6a8407] 2026-03-29 01:37:23.139231 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=050841b0-7196-4984-b353-ff43abdeda27] 2026-03-29 01:37:23.160968 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-29 01:37:23.176787 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-29 01:37:23.183975 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-29 01:37:23.190242 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-29 01:37:23.191548 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-29 01:37:23.193668 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-29 01:37:23.194886 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-29 01:37:24.759342 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=68db817c-049d-41b2-8a0c-52ad1e575b33] 2026-03-29 01:37:24.770522 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-29 01:37:24.773894 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-29 01:37:24.775659 | orchestrator | local_file.inventory: Creating... 
2026-03-29 01:37:24.779518 | orchestrator | local_file.inventory: Creation complete after 0s [id=01c905a25aa38c693cafb8bee5f181791d1df57c] 2026-03-29 01:37:24.779561 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=eeedd1fa5e1c772c317f528a72241f87550db44f] 2026-03-29 01:37:26.009654 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=68db817c-049d-41b2-8a0c-52ad1e575b33] 2026-03-29 01:37:33.191730 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-29 01:37:33.191850 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-29 01:37:33.191886 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-03-29 01:37:33.198830 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-29 01:37:33.199930 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-29 01:37:33.199964 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-29 01:37:43.192249 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-29 01:37:43.192440 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-29 01:37:43.192523 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-29 01:37:43.199968 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-29 01:37:43.200058 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-29 01:37:43.200136 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[20s elapsed] 2026-03-29 01:37:43.866194 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=63984b7f-e223-42d4-a2c6-999640c58396] 2026-03-29 01:37:43.896415 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=08e3a31e-75ae-46ce-bbfc-e1dbfadd8a78] 2026-03-29 01:37:44.117539 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=d1f41c98-978f-420d-8f98-24bcf1a102de] 2026-03-29 01:37:53.193619 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-29 01:37:53.193839 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-29 01:37:53.200988 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-29 01:37:53.854450 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=2c3b5918-fd86-4336-9dbf-00dcb0bb7193] 2026-03-29 01:37:53.884507 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=45e3c456-678f-477b-af61-3f5f72c98f33] 2026-03-29 01:37:54.187198 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=6b7785b0-59a9-4e5d-a0b5-0e78857954f4] 2026-03-29 01:37:54.202442 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-29 01:37:54.209778 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6379670260878945890] 2026-03-29 01:37:54.212988 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-29 01:37:54.213031 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-29 01:37:54.215151 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 
2026-03-29 01:37:54.221466 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-29 01:37:54.223949 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-29 01:37:54.228234 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-29 01:37:54.229029 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-29 01:37:54.242939 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-29 01:37:54.243003 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-29 01:37:54.260018 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-03-29 01:37:57.604548 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=2c3b5918-fd86-4336-9dbf-00dcb0bb7193/be2200f0-5502-47ad-8b86-f79404ad3d6e] 2026-03-29 01:37:57.615037 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=d1f41c98-978f-420d-8f98-24bcf1a102de/93baa594-14d8-4050-b691-1dff11f6053a] 2026-03-29 01:37:57.632831 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=6b7785b0-59a9-4e5d-a0b5-0e78857954f4/ef57056d-cdc7-4754-ab80-1b6d0ee4138b] 2026-03-29 01:37:57.633766 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=2c3b5918-fd86-4336-9dbf-00dcb0bb7193/3d42ed5a-37f6-4df6-b807-f02e933f3249] 2026-03-29 01:37:57.659455 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=d1f41c98-978f-420d-8f98-24bcf1a102de/10b9e860-1cc5-4615-8ff0-9bdd7bb94f62] 2026-03-29 01:37:57.666807 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 
4s [id=6b7785b0-59a9-4e5d-a0b5-0e78857954f4/002a7ab0-e850-4de5-8841-9c71e722e4fa] 2026-03-29 01:38:03.737622 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=d1f41c98-978f-420d-8f98-24bcf1a102de/2180dd6a-0158-4028-8893-0009518a5de0] 2026-03-29 01:38:03.740240 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=2c3b5918-fd86-4336-9dbf-00dcb0bb7193/d786153b-aa88-42e2-b7c0-be41a0e4d472] 2026-03-29 01:38:03.767798 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=6b7785b0-59a9-4e5d-a0b5-0e78857954f4/ee98996d-a6b6-4070-b987-1a6503ed9735] 2026-03-29 01:38:04.260829 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-29 01:38:14.261757 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-29 01:38:14.527197 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=cbb626c4-f763-451f-ba27-0de02e6670f3] 2026-03-29 01:38:14.551146 | orchestrator | 2026-03-29 01:38:14.551236 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
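The plan summary earlier ("Plan: 64 to add, 0 to change, 0 to destroy.") and the apply summary just above ("Apply complete! Resources: 64 added, 0 changed, 0 destroyed.") should agree. A hedged sketch of cross-checking the two from captured console text (the function name and regex are illustrative, not anything the job runs):

```python
import re

def summary_counts(line: str) -> tuple[int, int, int]:
    """Extract (add, change, destroy) counts from a Terraform summary line.

    Handles both the plan form ("64 to add, ...") and the apply form
    ("64 added, ...").
    """
    m = re.search(
        r"(\d+) (?:to add|added), (\d+) (?:to change|changed), (\d+) (?:to destroy|destroyed)",
        line,
    )
    if m is None:
        raise ValueError(f"no Terraform summary in: {line!r}")
    return tuple(int(g) for g in m.groups())

plan = summary_counts("Plan: 64 to add, 0 to change, 0 to destroy.")
applied = summary_counts("Apply complete! Resources: 64 added, 0 changed, 0 destroyed.")
assert plan == applied == (64, 0, 0)
```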
2026-03-29 01:38:14.551248 | orchestrator | 2026-03-29 01:38:14.551257 | orchestrator | Outputs: 2026-03-29 01:38:14.551266 | orchestrator | 2026-03-29 01:38:14.551274 | orchestrator | manager_address = 2026-03-29 01:38:14.551283 | orchestrator | private_key = 2026-03-29 01:38:14.700058 | orchestrator | ok: Runtime: 0:01:06.991639 2026-03-29 01:38:14.730931 | 2026-03-29 01:38:14.731079 | TASK [Fetch manager address] 2026-03-29 01:38:15.214442 | orchestrator | ok 2026-03-29 01:38:15.225346 | 2026-03-29 01:38:15.225483 | TASK [Set manager_host address] 2026-03-29 01:38:15.306204 | orchestrator | ok 2026-03-29 01:38:15.315599 | 2026-03-29 01:38:15.315742 | LOOP [Update ansible collections] 2026-03-29 01:38:16.784063 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-29 01:38:16.784366 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-29 01:38:16.784416 | orchestrator | Starting galaxy collection install process 2026-03-29 01:38:16.784458 | orchestrator | Process install dependency map 2026-03-29 01:38:16.784491 | orchestrator | Starting collection install process 2026-03-29 01:38:16.784522 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-03-29 01:38:16.784556 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-03-29 01:38:16.784590 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-29 01:38:16.784653 | orchestrator | ok: Item: commons Runtime: 0:00:01.126892 2026-03-29 01:38:17.742163 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-29 01:38:17.742373 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-29 01:38:17.742446 | orchestrator | Starting galaxy collection 
install process 2026-03-29 01:38:17.742500 | orchestrator | Process install dependency map 2026-03-29 01:38:17.742547 | orchestrator | Starting collection install process 2026-03-29 01:38:17.742646 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-03-29 01:38:17.742692 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-03-29 01:38:17.742728 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-29 01:38:17.742789 | orchestrator | ok: Item: services Runtime: 0:00:00.628390 2026-03-29 01:38:17.761522 | 2026-03-29 01:38:17.761629 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-29 01:38:28.341358 | orchestrator | ok 2026-03-29 01:38:28.351993 | 2026-03-29 01:38:28.352164 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-29 01:39:28.392524 | orchestrator | ok 2026-03-29 01:39:28.405235 | 2026-03-29 01:39:28.405421 | TASK [Fetch manager ssh hostkey] 2026-03-29 01:39:29.985192 | orchestrator | Output suppressed because no_log was given 2026-03-29 01:39:30.000471 | 2026-03-29 01:39:30.000802 | TASK [Get ssh keypair from terraform environment] 2026-03-29 01:39:30.538508 | orchestrator | ok: Runtime: 0:00:00.010225 2026-03-29 01:39:30.554824 | 2026-03-29 01:39:30.555088 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-29 01:39:30.596388 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
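The task above ("Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"") uses Ansible's `wait_for` behavior: open the port and look for a pattern in the initial SSH banner before treating the manager as reachable. A rough standalone sketch of the same check (function names are illustrative; the job itself uses the Ansible module, not this code):

```python
import socket

def banner_ok(banner: str) -> bool:
    """True if an initial SSH banner looks like a ready OpenSSH server."""
    return banner.startswith("SSH-") and "OpenSSH" in banner

def wait_for_ssh(host: str, port: int = 22, timeout: float = 5.0) -> bool:
    """Open the port and inspect the banner, roughly what Ansible's
    wait_for does with search_regex=OpenSSH; returns False on any
    connection error instead of raising."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return banner_ok(sock.recv(256).decode("ascii", errors="replace"))
    except OSError:
        return False
```

A banner like `SSH-2.0-OpenSSH_9.6p1 Ubuntu-...` (typical for Ubuntu 24.04) passes the check; anything else keeps the wait loop going.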
2026-03-29 01:39:30.606302 | 2026-03-29 01:39:30.606420 | TASK [Run manager part 0] 2026-03-29 01:39:31.660014 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-29 01:39:31.717234 | orchestrator | 2026-03-29 01:39:31.717289 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-29 01:39:31.717298 | orchestrator | 2026-03-29 01:39:31.717314 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-29 01:39:33.449801 | orchestrator | ok: [testbed-manager] 2026-03-29 01:39:33.449850 | orchestrator | 2026-03-29 01:39:33.449876 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-29 01:39:33.449888 | orchestrator | 2026-03-29 01:39:33.449899 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 01:39:35.274872 | orchestrator | ok: [testbed-manager] 2026-03-29 01:39:35.274904 | orchestrator | 2026-03-29 01:39:35.274909 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-29 01:39:35.883471 | orchestrator | ok: [testbed-manager] 2026-03-29 01:39:35.883545 | orchestrator | 2026-03-29 01:39:35.883561 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-29 01:39:35.924760 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:39:35.924824 | orchestrator | 2026-03-29 01:39:35.924838 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-29 01:39:35.970730 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:39:35.970798 | orchestrator | 2026-03-29 01:39:35.970806 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-29 01:39:36.012364 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:39:36.012449 | 
orchestrator | 2026-03-29 01:39:36.012458 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-29 01:39:36.725559 | orchestrator | changed: [testbed-manager] 2026-03-29 01:39:36.725618 | orchestrator | 2026-03-29 01:39:36.725626 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-29 01:42:19.562191 | orchestrator | changed: [testbed-manager] 2026-03-29 01:42:19.562417 | orchestrator | 2026-03-29 01:42:19.562444 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-29 01:43:34.267043 | orchestrator | changed: [testbed-manager] 2026-03-29 01:43:34.267144 | orchestrator | 2026-03-29 01:43:34.267164 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-29 01:43:53.325803 | orchestrator | changed: [testbed-manager] 2026-03-29 01:43:53.325891 | orchestrator | 2026-03-29 01:43:53.325912 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-29 01:44:01.565476 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:01.565624 | orchestrator | 2026-03-29 01:44:01.565653 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-29 01:44:01.618880 | orchestrator | ok: [testbed-manager] 2026-03-29 01:44:01.618966 | orchestrator | 2026-03-29 01:44:01.618982 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-29 01:44:02.416683 | orchestrator | ok: [testbed-manager] 2026-03-29 01:44:02.416780 | orchestrator | 2026-03-29 01:44:02.416801 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-29 01:44:03.153503 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:03.153656 | orchestrator | 2026-03-29 01:44:03.153680 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-03-29 01:44:08.745673 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:08.745789 | orchestrator | 2026-03-29 01:44:08.745810 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-29 01:44:14.161986 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:14.162121 | orchestrator | 2026-03-29 01:44:14.162138 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-29 01:44:16.757463 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:16.757537 | orchestrator | 2026-03-29 01:44:16.757554 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-29 01:44:18.444741 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:18.444844 | orchestrator | 2026-03-29 01:44:18.444864 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-29 01:44:19.511023 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-29 01:44:19.511084 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-29 01:44:19.511091 | orchestrator | 2026-03-29 01:44:19.511099 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-29 01:44:19.561422 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-29 01:44:19.561475 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-29 01:44:19.561481 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-29 01:44:19.561487 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-29 01:44:24.342675 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-29 01:44:24.342744 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-29 01:44:24.342759 | orchestrator | 2026-03-29 01:44:24.342771 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-29 01:44:24.899130 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:24.899166 | orchestrator | 2026-03-29 01:44:24.899173 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-29 01:44:43.738599 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-29 01:44:43.738755 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-29 01:44:43.738787 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-29 01:44:43.738809 | orchestrator | 2026-03-29 01:44:43.738828 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-29 01:44:45.969287 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-29 01:44:45.969376 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-29 01:44:45.969392 | orchestrator | 2026-03-29 01:44:45.969407 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-29 01:44:45.969420 | orchestrator | 2026-03-29 01:44:45.969432 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 01:44:47.333312 | orchestrator | ok: [testbed-manager] 2026-03-29 01:44:47.333432 | orchestrator | 2026-03-29 01:44:47.333451 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-29 01:44:47.380765 | orchestrator | ok: [testbed-manager] 2026-03-29 01:44:47.380856 | 
orchestrator | 2026-03-29 01:44:47.380871 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-29 01:44:47.455998 | orchestrator | ok: [testbed-manager] 2026-03-29 01:44:47.456049 | orchestrator | 2026-03-29 01:44:47.456056 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-29 01:44:48.227341 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:48.227396 | orchestrator | 2026-03-29 01:44:48.227406 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-29 01:44:48.903397 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:48.903438 | orchestrator | 2026-03-29 01:44:48.903445 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-29 01:44:50.195165 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-29 01:44:50.195223 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-29 01:44:50.195234 | orchestrator | 2026-03-29 01:44:50.195244 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-29 01:44:51.569903 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:51.569976 | orchestrator | 2026-03-29 01:44:51.569991 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-29 01:44:53.288518 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 01:44:53.288618 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-29 01:44:53.288688 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-29 01:44:53.288703 | orchestrator | 2026-03-29 01:44:53.288716 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-29 01:44:53.355359 | orchestrator | skipping: 
[testbed-manager] 2026-03-29 01:44:53.355439 | orchestrator | 2026-03-29 01:44:53.355449 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-29 01:44:53.430417 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:44:53.430469 | orchestrator | 2026-03-29 01:44:53.430476 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-29 01:44:53.960219 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:53.960327 | orchestrator | 2026-03-29 01:44:53.960350 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-29 01:44:54.030374 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:44:54.030463 | orchestrator | 2026-03-29 01:44:54.030478 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-29 01:44:54.863876 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-29 01:44:54.863963 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:54.863980 | orchestrator | 2026-03-29 01:44:54.863993 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-29 01:44:54.897446 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:44:54.897505 | orchestrator | 2026-03-29 01:44:54.897514 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-29 01:44:54.935899 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:44:54.935964 | orchestrator | 2026-03-29 01:44:54.935973 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-29 01:44:54.973271 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:44:54.973357 | orchestrator | 2026-03-29 01:44:54.973373 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-29 01:44:55.048433 | 
orchestrator | skipping: [testbed-manager] 2026-03-29 01:44:55.048510 | orchestrator | 2026-03-29 01:44:55.048523 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-29 01:44:55.776687 | orchestrator | ok: [testbed-manager] 2026-03-29 01:44:55.776782 | orchestrator | 2026-03-29 01:44:55.776799 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-29 01:44:55.776812 | orchestrator | 2026-03-29 01:44:55.776826 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 01:44:57.186695 | orchestrator | ok: [testbed-manager] 2026-03-29 01:44:57.186764 | orchestrator | 2026-03-29 01:44:57.186776 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-29 01:44:58.134487 | orchestrator | changed: [testbed-manager] 2026-03-29 01:44:58.134523 | orchestrator | 2026-03-29 01:44:58.134528 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:44:58.134534 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-29 01:44:58.134538 | orchestrator | 2026-03-29 01:44:58.353062 | orchestrator | ok: Runtime: 0:05:27.318615 2026-03-29 01:44:58.370765 | 2026-03-29 01:44:58.370945 | TASK [Point out that logging in to the manager is now possible] 2026-03-29 01:44:58.408496 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-29 01:44:58.420634 | 2026-03-29 01:44:58.420817 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-29 01:44:58.457128 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 
2026-03-29 01:44:58.470912 | 2026-03-29 01:44:58.471070 | TASK [Run manager part 1 + 2] 2026-03-29 01:44:59.382715 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-29 01:44:59.441754 | orchestrator | 2026-03-29 01:44:59.441862 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-29 01:44:59.441893 | orchestrator | 2026-03-29 01:44:59.441937 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 01:45:01.787285 | orchestrator | ok: [testbed-manager] 2026-03-29 01:45:01.787327 | orchestrator | 2026-03-29 01:45:01.787352 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-29 01:45:01.832443 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:45:01.832486 | orchestrator | 2026-03-29 01:45:01.832494 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-29 01:45:01.880909 | orchestrator | ok: [testbed-manager] 2026-03-29 01:45:01.880954 | orchestrator | 2026-03-29 01:45:01.880963 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-29 01:45:01.927622 | orchestrator | ok: [testbed-manager] 2026-03-29 01:45:01.927725 | orchestrator | 2026-03-29 01:45:01.927749 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-29 01:45:01.986005 | orchestrator | ok: [testbed-manager] 2026-03-29 01:45:01.986142 | orchestrator | 2026-03-29 01:45:01.986159 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-29 01:45:02.055505 | orchestrator | ok: [testbed-manager] 2026-03-29 01:45:02.055572 | orchestrator | 2026-03-29 01:45:02.055590 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-29 01:45:02.114693 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-29 01:45:02.114738 | orchestrator | 2026-03-29 01:45:02.114744 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-29 01:45:02.821279 | orchestrator | ok: [testbed-manager] 2026-03-29 01:45:02.821349 | orchestrator | 2026-03-29 01:45:02.821371 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-29 01:45:02.866419 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:45:02.866482 | orchestrator | 2026-03-29 01:45:02.866495 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-29 01:45:04.197987 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:04.257880 | orchestrator | 2026-03-29 01:45:04.257975 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-29 01:45:04.732934 | orchestrator | ok: [testbed-manager] 2026-03-29 01:45:04.733000 | orchestrator | 2026-03-29 01:45:04.733014 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-29 01:45:05.811340 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:05.811395 | orchestrator | 2026-03-29 01:45:05.811407 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-29 01:45:20.444730 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:20.444790 | orchestrator | 2026-03-29 01:45:20.444804 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-29 01:45:21.130568 | orchestrator | ok: [testbed-manager] 2026-03-29 01:45:21.130609 | orchestrator | 2026-03-29 01:45:21.130619 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-29 01:45:21.183626 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:45:21.183754 | orchestrator | 2026-03-29 01:45:21.183780 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-29 01:45:22.100265 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:22.100337 | orchestrator | 2026-03-29 01:45:22.100348 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-29 01:45:23.050141 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:23.050205 | orchestrator | 2026-03-29 01:45:23.050213 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-29 01:45:23.609840 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:23.609879 | orchestrator | 2026-03-29 01:45:23.609886 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-29 01:45:23.655238 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-29 01:45:23.655355 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-29 01:45:23.655369 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-29 01:45:23.655380 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-29 01:45:26.847159 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:26.847275 | orchestrator | 2026-03-29 01:45:26.847302 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-29 01:45:35.379302 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-29 01:45:35.379397 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-29 01:45:35.379414 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-29 01:45:35.379425 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-29 01:45:35.379442 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-29 01:45:35.379452 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-29 01:45:35.379462 | orchestrator | 2026-03-29 01:45:35.379472 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-29 01:45:37.157455 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:37.157507 | orchestrator | 2026-03-29 01:45:37.157514 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-29 01:45:40.219090 | orchestrator | changed: [testbed-manager] 2026-03-29 01:45:40.219174 | orchestrator | 2026-03-29 01:45:40.219192 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-29 01:45:40.257172 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:45:40.257282 | orchestrator | 2026-03-29 01:45:40.257308 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-29 01:47:13.933713 | orchestrator | changed: [testbed-manager] 2026-03-29 01:47:13.933860 | orchestrator | 2026-03-29 01:47:13.933888 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-29 01:47:15.033453 | orchestrator | ok: [testbed-manager] 2026-03-29 01:47:15.033557 | 
orchestrator | 2026-03-29 01:47:15.033575 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:47:15.033588 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-29 01:47:15.033599 | orchestrator | 2026-03-29 01:47:15.602106 | orchestrator | ok: Runtime: 0:02:16.341482 2026-03-29 01:47:15.619634 | 2026-03-29 01:47:15.619898 | TASK [Reboot manager] 2026-03-29 01:47:17.159101 | orchestrator | ok: Runtime: 0:00:00.948460 2026-03-29 01:47:17.176126 | 2026-03-29 01:47:17.176291 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-29 01:47:30.928149 | orchestrator | ok 2026-03-29 01:47:30.937208 | 2026-03-29 01:47:30.937325 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-29 01:48:30.990179 | orchestrator | ok 2026-03-29 01:48:30.999456 | 2026-03-29 01:48:30.999636 | TASK [Deploy manager + bootstrap nodes] 2026-03-29 01:48:33.397027 | orchestrator | 2026-03-29 01:48:33.397147 | orchestrator | # DEPLOY MANAGER 2026-03-29 01:48:33.397155 | orchestrator | 2026-03-29 01:48:33.397161 | orchestrator | + set -e 2026-03-29 01:48:33.397165 | orchestrator | + echo 2026-03-29 01:48:33.397171 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-29 01:48:33.397178 | orchestrator | + echo 2026-03-29 01:48:33.397202 | orchestrator | + cat /opt/manager-vars.sh 2026-03-29 01:48:33.400108 | orchestrator | export NUMBER_OF_NODES=6 2026-03-29 01:48:33.400150 | orchestrator | 2026-03-29 01:48:33.400156 | orchestrator | export CEPH_VERSION=reef 2026-03-29 01:48:33.400162 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-29 01:48:33.400167 | orchestrator | export MANAGER_VERSION=9.5.0 2026-03-29 01:48:33.400179 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-29 01:48:33.400183 | orchestrator | 2026-03-29 01:48:33.400191 | orchestrator | export ARA=false 2026-03-29 01:48:33.400195 | orchestrator 
| export DEPLOY_MODE=manager 2026-03-29 01:48:33.400202 | orchestrator | export TEMPEST=false 2026-03-29 01:48:33.400206 | orchestrator | export IS_ZUUL=true 2026-03-29 01:48:33.400210 | orchestrator | 2026-03-29 01:48:33.400218 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 01:48:33.400222 | orchestrator | export EXTERNAL_API=false 2026-03-29 01:48:33.400226 | orchestrator | 2026-03-29 01:48:33.400229 | orchestrator | export IMAGE_USER=ubuntu 2026-03-29 01:48:33.400236 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-29 01:48:33.400240 | orchestrator | 2026-03-29 01:48:33.400244 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-29 01:48:33.400248 | orchestrator | 2026-03-29 01:48:33.400252 | orchestrator | + echo 2026-03-29 01:48:33.400256 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 01:48:33.400909 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 01:48:33.400931 | orchestrator | ++ INTERACTIVE=false 2026-03-29 01:48:33.400942 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 01:48:33.400950 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 01:48:33.401141 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 01:48:33.401152 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 01:48:33.401159 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 01:48:33.401165 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 01:48:33.401172 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 01:48:33.401178 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 01:48:33.401186 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 01:48:33.401193 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 01:48:33.401200 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 01:48:33.401206 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 01:48:33.401221 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 01:48:33.401228 | orchestrator | ++ export ARA=false 
2026-03-29 01:48:33.401236 | orchestrator | ++ ARA=false 2026-03-29 01:48:33.401242 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 01:48:33.401248 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 01:48:33.401260 | orchestrator | ++ export TEMPEST=false 2026-03-29 01:48:33.401267 | orchestrator | ++ TEMPEST=false 2026-03-29 01:48:33.401273 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 01:48:33.401280 | orchestrator | ++ IS_ZUUL=true 2026-03-29 01:48:33.401286 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 01:48:33.401292 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 01:48:33.401299 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 01:48:33.401305 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 01:48:33.401311 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 01:48:33.401317 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 01:48:33.401324 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 01:48:33.401330 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 01:48:33.401337 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 01:48:33.401344 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 01:48:33.401353 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-29 01:48:33.448900 | orchestrator | + docker version 2026-03-29 01:48:33.564008 | orchestrator | Client: Docker Engine - Community 2026-03-29 01:48:33.564105 | orchestrator | Version: 27.5.1 2026-03-29 01:48:33.564118 | orchestrator | API version: 1.47 2026-03-29 01:48:33.564126 | orchestrator | Go version: go1.22.11 2026-03-29 01:48:33.564133 | orchestrator | Git commit: 9f9e405 2026-03-29 01:48:33.564140 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-29 01:48:33.564148 | orchestrator | OS/Arch: linux/amd64 2026-03-29 01:48:33.564155 | orchestrator | Context: default 2026-03-29 01:48:33.564162 | orchestrator | 2026-03-29 01:48:33.564168 | 
orchestrator | Server: Docker Engine - Community 2026-03-29 01:48:33.564175 | orchestrator | Engine: 2026-03-29 01:48:33.564182 | orchestrator | Version: 27.5.1 2026-03-29 01:48:33.564189 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-29 01:48:33.564221 | orchestrator | Go version: go1.22.11 2026-03-29 01:48:33.564229 | orchestrator | Git commit: 4c9b3b0 2026-03-29 01:48:33.564248 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-29 01:48:33.564255 | orchestrator | OS/Arch: linux/amd64 2026-03-29 01:48:33.564262 | orchestrator | Experimental: false 2026-03-29 01:48:33.564268 | orchestrator | containerd: 2026-03-29 01:48:33.564275 | orchestrator | Version: v2.2.2 2026-03-29 01:48:33.564282 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-29 01:48:33.564289 | orchestrator | runc: 2026-03-29 01:48:33.564296 | orchestrator | Version: 1.3.4 2026-03-29 01:48:33.564303 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-29 01:48:33.564310 | orchestrator | docker-init: 2026-03-29 01:48:33.564316 | orchestrator | Version: 0.19.0 2026-03-29 01:48:33.564324 | orchestrator | GitCommit: de40ad0 2026-03-29 01:48:33.566317 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-29 01:48:33.575293 | orchestrator | + set -e 2026-03-29 01:48:33.575366 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 01:48:33.575374 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 01:48:33.575382 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 01:48:33.575389 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 01:48:33.575397 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 01:48:33.575404 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 01:48:33.575413 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 01:48:33.575421 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 01:48:33.575428 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 01:48:33.575436 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-03-29 01:48:33.575443 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 01:48:33.575451 | orchestrator | ++ export ARA=false 2026-03-29 01:48:33.575459 | orchestrator | ++ ARA=false 2026-03-29 01:48:33.575467 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 01:48:33.575474 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 01:48:33.575482 | orchestrator | ++ export TEMPEST=false 2026-03-29 01:48:33.575489 | orchestrator | ++ TEMPEST=false 2026-03-29 01:48:33.575497 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 01:48:33.575504 | orchestrator | ++ IS_ZUUL=true 2026-03-29 01:48:33.575512 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 01:48:33.575519 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 01:48:33.575527 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 01:48:33.575534 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 01:48:33.575541 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 01:48:33.575549 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 01:48:33.575557 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 01:48:33.575564 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 01:48:33.575571 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 01:48:33.575579 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 01:48:33.575586 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 01:48:33.575593 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 01:48:33.575600 | orchestrator | ++ INTERACTIVE=false 2026-03-29 01:48:33.575607 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 01:48:33.575617 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 01:48:33.575633 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-29 01:48:33.575641 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-29 01:48:33.582156 | orchestrator | + set -e 2026-03-29 
01:48:33.582230 | orchestrator | + VERSION=9.5.0
2026-03-29 01:48:33.582243 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-03-29 01:48:33.590370 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-29 01:48:33.590470 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-29 01:48:33.594076 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-29 01:48:33.598087 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-29 01:48:33.605358 | orchestrator | /opt/configuration ~
2026-03-29 01:48:33.605418 | orchestrator | + set -e
2026-03-29 01:48:33.605427 | orchestrator | + pushd /opt/configuration
2026-03-29 01:48:33.605436 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-29 01:48:33.606238 | orchestrator | + source /opt/venv/bin/activate
2026-03-29 01:48:33.607341 | orchestrator | ++ deactivate nondestructive
2026-03-29 01:48:33.607377 | orchestrator | ++ '[' -n '' ']'
2026-03-29 01:48:33.607387 | orchestrator | ++ '[' -n '' ']'
2026-03-29 01:48:33.607417 | orchestrator | ++ hash -r
2026-03-29 01:48:33.607424 | orchestrator | ++ '[' -n '' ']'
2026-03-29 01:48:33.607430 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-29 01:48:33.607436 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-29 01:48:33.607442 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-29 01:48:33.607675 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-29 01:48:33.607690 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-29 01:48:33.607697 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-29 01:48:33.607704 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-29 01:48:33.607711 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 01:48:33.607719 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 01:48:33.607726 | orchestrator | ++ export PATH
2026-03-29 01:48:33.607733 | orchestrator | ++ '[' -n '' ']'
2026-03-29 01:48:33.607740 | orchestrator | ++ '[' -z '' ']'
2026-03-29 01:48:33.607746 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-29 01:48:33.607758 | orchestrator | ++ PS1='(venv) '
2026-03-29 01:48:33.607765 | orchestrator | ++ export PS1
2026-03-29 01:48:33.607772 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-29 01:48:33.607778 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-29 01:48:33.607812 | orchestrator | ++ hash -r
2026-03-29 01:48:33.607820 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-29 01:48:34.535845 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-29 01:48:34.536495 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0)
2026-03-29 01:48:34.538146 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-29 01:48:34.539246 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-29 01:48:34.540592 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-29 01:48:34.550335 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-29 01:48:34.551747 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-29 01:48:34.552719 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-29 01:48:34.554200 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-29 01:48:34.586494 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-29 01:48:34.587828 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-29 01:48:34.589476 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-29 01:48:34.590806 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-29 01:48:34.594682 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-29 01:48:34.790380 | orchestrator | ++ which gilt
2026-03-29 01:48:34.794200 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-29 01:48:34.794283 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-29 01:48:35.049205 | orchestrator | osism.cfg-generics:
2026-03-29 01:48:35.188296 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-29 01:48:35.188381 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-29 01:48:35.188426 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-29 01:48:35.188563 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-29 01:48:35.844306 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-29 01:48:35.852336 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-29 01:48:36.159133 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-29 01:48:36.201965 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-29 01:48:36.202065 | orchestrator | + deactivate
2026-03-29 01:48:36.202074 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-29 01:48:36.202081 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 01:48:36.202085 | orchestrator | + export PATH
2026-03-29 01:48:36.202089 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-29 01:48:36.202094 | orchestrator | + '[' -n '' ']'
2026-03-29 01:48:36.202100 | orchestrator | + hash -r
2026-03-29 01:48:36.202104 | orchestrator | + '[' -n '' ']'
2026-03-29 01:48:36.202109 | orchestrator | + unset VIRTUAL_ENV
2026-03-29 01:48:36.202113 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-29 01:48:36.202117 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-29 01:48:36.202121 | orchestrator | + unset -f deactivate
2026-03-29 01:48:36.202125 | orchestrator | + popd
2026-03-29 01:48:36.202129 | orchestrator | ~
2026-03-29 01:48:36.203564 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-29 01:48:36.203649 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-29 01:48:36.204201 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-29 01:48:36.260616 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-29 01:48:36.260741 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-29 01:48:36.261612 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-29 01:48:36.319104 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-29 01:48:36.319456 | orchestrator | ++ semver 2024.2 2025.1
2026-03-29 01:48:36.375600 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-29 01:48:36.375763 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-29 01:48:36.465354 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-29 01:48:36.465433 | orchestrator | + source /opt/venv/bin/activate
2026-03-29 01:48:36.465440 | orchestrator | ++ deactivate nondestructive
2026-03-29 01:48:36.465445 | orchestrator | ++ '[' -n '' ']'
2026-03-29 01:48:36.465449 | orchestrator | ++ '[' -n '' ']'
2026-03-29 01:48:36.465453 | orchestrator | ++ hash -r
2026-03-29 01:48:36.465457 | orchestrator | ++ '[' -n '' ']'
2026-03-29 01:48:36.465461 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-29 01:48:36.465465 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-29 01:48:36.465469 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-29 01:48:36.465474 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-29 01:48:36.465478 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-29 01:48:36.465482 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-29 01:48:36.465486 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-29 01:48:36.465491 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 01:48:36.465511 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 01:48:36.465515 | orchestrator | ++ export PATH
2026-03-29 01:48:36.465519 | orchestrator | ++ '[' -n '' ']'
2026-03-29 01:48:36.465530 | orchestrator | ++ '[' -z '' ']'
2026-03-29 01:48:36.465534 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-29 01:48:36.465538 | orchestrator | ++ PS1='(venv) '
2026-03-29 01:48:36.465542 | orchestrator | ++ export PS1
2026-03-29 01:48:36.465546 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-29 01:48:36.465550 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-29 01:48:36.465553 | orchestrator | ++ hash -r
2026-03-29 01:48:36.465893 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-29 01:48:37.427648 | orchestrator |
2026-03-29 01:48:37.427769 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-29 01:48:37.427782 | orchestrator |
2026-03-29 01:48:37.427788 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-29 01:48:37.971628 | orchestrator | ok: [testbed-manager]
2026-03-29 01:48:37.971799 | orchestrator |
2026-03-29 01:48:37.971817 | orchestrator | TASK [Copy fact files] *********************************************************
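The bash trace above gates configuration flags on a `semver` helper that compares two versions and prints -1, 0, or 1 (e.g. `semver 9.5.0 7.0.0` printed 1, so `[[ 1 -ge 0 ]]` passed and `enable_osism_kubernetes: true` was appended). A minimal sketch of that pattern, assuming a hypothetical `semver_cmp` re-implementation based on GNU `sort -V` (the real helper in the testbed repository may differ):

```shell
#!/usr/bin/env bash
# semver_cmp: hypothetical stand-in for the `semver` helper in the trace.
# Prints -1, 0 or 1 depending on whether $1 sorts before, equal to,
# or after $2 in version order.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

MANAGER_VERSION=9.5.0
# Mirrors the gate in the trace: manager >= 7.0.0 enables the flag.
if [[ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 ]]; then
    echo 'enable_osism_kubernetes: true'
fi
```

The sign convention (`-ge 0` meaning "at least this version") matches the `[[ 1 -ge 0 ]]` / `[[ -1 -ge 0 ]]` tests visible in the trace.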
2026-03-29 01:48:38.926568 | orchestrator | changed: [testbed-manager]
2026-03-29 01:48:38.926751 | orchestrator |
2026-03-29 01:48:38.926817 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-29 01:48:38.926856 | orchestrator |
2026-03-29 01:48:38.926868 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 01:48:41.113864 | orchestrator | ok: [testbed-manager]
2026-03-29 01:48:41.113963 | orchestrator |
2026-03-29 01:48:41.113980 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-29 01:48:41.155195 | orchestrator | ok: [testbed-manager]
2026-03-29 01:48:41.155322 | orchestrator |
2026-03-29 01:48:41.155349 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-29 01:48:41.581337 | orchestrator | changed: [testbed-manager]
2026-03-29 01:48:41.581460 | orchestrator |
2026-03-29 01:48:41.581481 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-29 01:48:41.618329 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:48:41.618446 | orchestrator |
2026-03-29 01:48:41.618464 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-29 01:48:41.943441 | orchestrator | changed: [testbed-manager]
2026-03-29 01:48:41.943562 | orchestrator |
2026-03-29 01:48:41.943587 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-29 01:48:42.260875 | orchestrator | ok: [testbed-manager]
2026-03-29 01:48:42.260976 | orchestrator |
2026-03-29 01:48:42.260992 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-29 01:48:42.377535 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:48:42.377634 | orchestrator |
2026-03-29 01:48:42.377719 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-29 01:48:42.377737 | orchestrator |
2026-03-29 01:48:42.377749 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 01:48:44.086751 | orchestrator | ok: [testbed-manager]
2026-03-29 01:48:44.086933 | orchestrator |
2026-03-29 01:48:44.086964 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-29 01:48:44.172053 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-29 01:48:44.172140 | orchestrator |
2026-03-29 01:48:44.172152 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-29 01:48:44.223162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-29 01:48:44.223261 | orchestrator |
2026-03-29 01:48:44.223278 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-29 01:48:45.291717 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-29 01:48:45.291803 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-29 01:48:45.291813 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-29 01:48:45.291821 | orchestrator |
2026-03-29 01:48:45.291831 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-29 01:48:47.027745 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-29 01:48:47.027864 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-29 01:48:47.027882 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-29 01:48:47.027894 | orchestrator |
2026-03-29 01:48:47.027906 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-29 01:48:47.625343 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 01:48:47.625462 | orchestrator | changed: [testbed-manager]
2026-03-29 01:48:47.625481 | orchestrator |
2026-03-29 01:48:47.625500 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-29 01:48:48.250235 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 01:48:48.250336 | orchestrator | changed: [testbed-manager]
2026-03-29 01:48:48.250353 | orchestrator |
2026-03-29 01:48:48.250365 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-29 01:48:48.303637 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:48:48.303762 | orchestrator |
2026-03-29 01:48:48.303777 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-29 01:48:48.671576 | orchestrator | ok: [testbed-manager]
2026-03-29 01:48:48.671712 | orchestrator |
2026-03-29 01:48:48.671724 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-29 01:48:48.741955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-29 01:48:48.742099 | orchestrator |
2026-03-29 01:48:48.742115 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-29 01:48:49.773452 | orchestrator | changed: [testbed-manager]
2026-03-29 01:48:49.773552 | orchestrator |
2026-03-29 01:48:49.773568 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-29 01:48:50.522208 | orchestrator | changed: [testbed-manager]
2026-03-29 01:48:50.522279 | orchestrator |
2026-03-29 01:48:50.522287 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-29 01:49:04.971940 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:04.972030 | orchestrator |
2026-03-29 01:49:04.972040 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-29 01:49:05.013522 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:49:05.013583 | orchestrator |
2026-03-29 01:49:05.013608 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-29 01:49:05.013617 | orchestrator |
2026-03-29 01:49:05.013624 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 01:49:06.834835 | orchestrator | ok: [testbed-manager]
2026-03-29 01:49:06.834966 | orchestrator |
2026-03-29 01:49:06.834986 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-29 01:49:06.930491 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-29 01:49:06.930596 | orchestrator |
2026-03-29 01:49:06.930613 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-29 01:49:06.984275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-29 01:49:06.984372 | orchestrator |
2026-03-29 01:49:06.984386 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-29 01:49:09.295890 | orchestrator | ok: [testbed-manager]
2026-03-29 01:49:09.295975 | orchestrator |
2026-03-29 01:49:09.295986 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-29 01:49:09.353478 | orchestrator | ok: [testbed-manager]
2026-03-29 01:49:09.353574 | orchestrator |
2026-03-29 01:49:09.353589 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-29 01:49:09.472689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-29 01:49:09.472790 | orchestrator |
2026-03-29 01:49:09.472808 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-29 01:49:12.242732 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-29 01:49:12.242847 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-29 01:49:12.242863 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-29 01:49:12.242874 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-29 01:49:12.242884 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-29 01:49:12.242898 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-29 01:49:12.242914 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-29 01:49:12.242930 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-29 01:49:12.242946 | orchestrator |
2026-03-29 01:49:12.242963 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-29 01:49:12.865297 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:12.865400 | orchestrator |
2026-03-29 01:49:12.865419 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-29 01:49:13.492615 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:13.492736 | orchestrator |
2026-03-29 01:49:13.492744 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-29 01:49:13.570789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-29 01:49:13.570868 | orchestrator |
2026-03-29 01:49:13.570879 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-29 01:49:14.733104 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-29 01:49:14.733187 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-29 01:49:14.733196 | orchestrator |
2026-03-29 01:49:14.733204 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-29 01:49:15.339232 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:15.339323 | orchestrator |
2026-03-29 01:49:15.339336 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-29 01:49:15.387819 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:49:15.387916 | orchestrator |
2026-03-29 01:49:15.387931 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-29 01:49:15.465894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-29 01:49:15.465997 | orchestrator |
2026-03-29 01:49:15.466074 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-29 01:49:16.054334 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:16.054430 | orchestrator |
2026-03-29 01:49:16.054445 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-29 01:49:16.114701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-29 01:49:16.114797 | orchestrator |
2026-03-29 01:49:16.114811 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-29 01:49:17.430360 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 01:49:17.430486 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 01:49:17.431238 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:17.431273 | orchestrator |
2026-03-29 01:49:17.431286 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-29 01:49:18.024727 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:18.024845 | orchestrator |
2026-03-29 01:49:18.024863 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-29 01:49:18.070986 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:49:18.071102 | orchestrator |
2026-03-29 01:49:18.071119 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-29 01:49:18.166607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-29 01:49:18.166727 | orchestrator |
2026-03-29 01:49:18.166739 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-29 01:49:18.607757 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:18.607881 | orchestrator |
2026-03-29 01:49:18.607899 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-29 01:49:18.938808 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:18.938882 | orchestrator |
2026-03-29 01:49:18.938890 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-29 01:49:20.022132 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-29 01:49:20.022226 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-29 01:49:20.022239 | orchestrator |
2026-03-29 01:49:20.022250 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-29 01:49:20.582931 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:20.583034 | orchestrator |
2026-03-29 01:49:20.583051 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-29 01:49:20.886469 | orchestrator | ok: [testbed-manager]
2026-03-29 01:49:20.886592 | orchestrator |
2026-03-29 01:49:20.886703 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-29 01:49:21.195801 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:21.195869 | orchestrator |
2026-03-29 01:49:21.195876 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-29 01:49:21.237495 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:49:21.237561 | orchestrator |
2026-03-29 01:49:21.237567 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-29 01:49:21.307855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-29 01:49:21.307991 | orchestrator |
2026-03-29 01:49:21.308016 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-29 01:49:21.339974 | orchestrator | ok: [testbed-manager]
2026-03-29 01:49:21.340086 | orchestrator |
2026-03-29 01:49:21.340107 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-29 01:49:23.109136 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-29 01:49:23.109247 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-29 01:49:23.109265 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-29 01:49:23.109278 | orchestrator |
2026-03-29 01:49:23.109290 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-29 01:49:23.714316 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:23.714440 | orchestrator |
2026-03-29 01:49:23.714458 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-29 01:49:24.335173 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:24.335278 | orchestrator |
2026-03-29 01:49:24.335294 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-29 01:49:24.947942 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:24.948054 | orchestrator |
2026-03-29 01:49:24.948071 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-29 01:49:25.022208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-29 01:49:25.022314 | orchestrator |
2026-03-29 01:49:25.022335 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-29 01:49:25.061438 | orchestrator | ok: [testbed-manager]
2026-03-29 01:49:25.061537 | orchestrator |
2026-03-29 01:49:25.061552 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-29 01:49:25.670380 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-29 01:49:25.670485 | orchestrator |
2026-03-29 01:49:25.670502 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-29 01:49:25.739035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-29 01:49:25.739123 | orchestrator |
2026-03-29 01:49:25.739135 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-29 01:49:26.406127 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:26.406209 | orchestrator |
2026-03-29 01:49:26.406218 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-29 01:49:27.002758 | orchestrator | ok: [testbed-manager]
2026-03-29 01:49:27.002850 | orchestrator |
2026-03-29 01:49:27.002866 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-29 01:49:27.057185 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:49:27.057278 | orchestrator |
2026-03-29 01:49:27.057293 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-29 01:49:27.119592 | orchestrator | ok: [testbed-manager]
2026-03-29 01:49:27.119737 | orchestrator |
2026-03-29 01:49:27.119746 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-29 01:49:27.918358 | orchestrator | changed: [testbed-manager]
2026-03-29 01:49:27.918441 | orchestrator |
2026-03-29 01:49:27.918453 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-29 01:50:34.615348 | orchestrator | changed: [testbed-manager]
2026-03-29 01:50:34.615487 | orchestrator |
2026-03-29 01:50:34.615517 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-29 01:50:36.569177 | orchestrator | ok: [testbed-manager]
2026-03-29 01:50:36.569291 | orchestrator |
2026-03-29 01:50:36.569306 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-29 01:50:36.611141 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:50:36.611240 | orchestrator |
2026-03-29 01:50:36.611255 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-29 01:50:38.873914 | orchestrator | changed: [testbed-manager]
2026-03-29 01:50:38.874071 | orchestrator |
2026-03-29 01:50:38.874097 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
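After the manager service is started, handlers pause, bring the containers up, and then wait for a healthy manager service with a bounded retry loop (the log shows "FAILED - RETRYING ... (50 retries left)" before success). A minimal sketch of that poll-until-healthy pattern, with the probe command, retry count, and delay as assumptions (the real handler is an Ansible task, not this script):

```shell
#!/usr/bin/env bash
# wait_healthy: poll a health probe until it reports "healthy",
# giving up after a fixed number of attempts. $1 is a command (or
# function) that prints the current health status; $2/$3 are the
# assumed retry count and delay between attempts.
wait_healthy() {
    local probe=$1 retries=${2:-50} delay=${3:-5}
    local i state
    for ((i = 0; i < retries; i++)); do
        state=$("$probe" 2>/dev/null || true)
        if [[ "$state" == "healthy" ]]; then
            return 0
        fi
        sleep "$delay"
    done
    return 1
}
```

In the real deployment the probe would be something along the lines of `docker inspect --format '{{.State.Health.Status}}' <container>`; the container name is not shown in this log excerpt.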
2026-03-29 01:50:38.966520 | orchestrator | ok: [testbed-manager]
2026-03-29 01:50:38.966631 | orchestrator |
2026-03-29 01:50:38.966640 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-29 01:50:38.966647 | orchestrator |
2026-03-29 01:50:38.966652 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-29 01:50:39.009893 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:50:39.009978 | orchestrator |
2026-03-29 01:50:39.009990 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-29 01:51:39.064926 | orchestrator | Pausing for 60 seconds
2026-03-29 01:51:39.065061 | orchestrator | changed: [testbed-manager]
2026-03-29 01:51:39.065129 | orchestrator |
2026-03-29 01:51:39.065151 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-29 01:51:42.203974 | orchestrator | changed: [testbed-manager]
2026-03-29 01:51:42.204064 | orchestrator |
2026-03-29 01:51:42.204076 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-29 01:52:23.606175 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-29 01:52:23.606348 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-29 01:52:23.606378 | orchestrator | changed: [testbed-manager]
2026-03-29 01:52:23.606401 | orchestrator |
2026-03-29 01:52:23.606453 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-29 01:52:33.157615 | orchestrator | changed: [testbed-manager]
2026-03-29 01:52:33.157763 | orchestrator |
2026-03-29 01:52:33.157791 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-29 01:52:33.268174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-29 01:52:33.268306 | orchestrator |
2026-03-29 01:52:33.268335 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-29 01:52:33.268355 | orchestrator |
2026-03-29 01:52:33.268374 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-29 01:52:33.318103 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:52:33.318204 | orchestrator |
2026-03-29 01:52:33.318219 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-29 01:52:33.381602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-29 01:52:33.381699 | orchestrator |
2026-03-29 01:52:33.381714 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-29 01:52:34.104962 | orchestrator | changed: [testbed-manager]
2026-03-29 01:52:34.105061 | orchestrator |
2026-03-29 01:52:34.105076 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-29 01:52:37.067086 | orchestrator | ok: [testbed-manager]
2026-03-29 01:52:37.067206 | orchestrator |
2026-03-29 01:52:37.067223 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-29 01:52:37.137512 | orchestrator | ok: [testbed-manager] => {
2026-03-29 01:52:37.137658 | orchestrator | "version_check_result.stdout_lines": [
2026-03-29 01:52:37.137680 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-29 01:52:37.137697 | orchestrator | "Checking running containers against expected versions...",
2026-03-29 01:52:37.137715 | orchestrator | "",
2026-03-29 01:52:37.137732 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-29 01:52:37.137748 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-29 01:52:37.137764 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.137781 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-29 01:52:37.137800 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.137817 | orchestrator | "",
2026-03-29 01:52:37.137835 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-29 01:52:37.137854 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-29 01:52:37.137867 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.137899 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-29 01:52:37.137910 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.137919 | orchestrator | "",
2026-03-29 01:52:37.137929 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-29 01:52:37.137938 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-29 01:52:37.137948 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.137958 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-29 01:52:37.137967 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.137977 | orchestrator | "",
2026-03-29 01:52:37.137986 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-29 01:52:37.137996 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-29 01:52:37.138005 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.138015 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-29 01:52:37.138076 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.138088 | orchestrator | "",
2026-03-29 01:52:37.138099 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-29 01:52:37.138112 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-29 01:52:37.138123 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.138134 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-29 01:52:37.138145 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.138156 | orchestrator | "",
2026-03-29 01:52:37.138167 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-29 01:52:37.138178 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-29 01:52:37.138189 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.138199 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-29 01:52:37.138211 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.138221 | orchestrator | "",
2026-03-29 01:52:37.138232 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-29 01:52:37.138243 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-29 01:52:37.138254 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.138265 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-29 01:52:37.138276 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.138287 | orchestrator | "",
2026-03-29 01:52:37.138298 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-29 01:52:37.138310 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-29 01:52:37.138320 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.138331 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-29 01:52:37.138342 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.138353 | orchestrator | "",
2026-03-29 01:52:37.138364 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-29 01:52:37.138375 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-29 01:52:37.138397 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.138408 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-29 01:52:37.138420 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.138429 | orchestrator | "",
2026-03-29 01:52:37.138438 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-29 01:52:37.138448 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-29 01:52:37.138457 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.138467 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-29 01:52:37.138476 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.138486 | orchestrator | "",
2026-03-29 01:52:37.138495 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-29 01:52:37.138504 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-29 01:52:37.138514 | orchestrator | " Enabled: true",
2026-03-29 01:52:37.138531 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-29 01:52:37.138541 | orchestrator | " Status: ✅ MATCH",
2026-03-29 01:52:37.138550 | orchestrator | "",
2026-03-29 01:52:37.138588 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-29 01:52:37.138605 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 01:52:37.138622 | orchestrator | " Enabled: true", 2026-03-29 01:52:37.138638 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 01:52:37.138654 | orchestrator | " Status: ✅ MATCH", 2026-03-29 01:52:37.138667 | orchestrator | "", 2026-03-29 01:52:37.138677 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-29 01:52:37.138687 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 01:52:37.138697 | orchestrator | " Enabled: true", 2026-03-29 01:52:37.138706 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 01:52:37.138715 | orchestrator | " Status: ✅ MATCH", 2026-03-29 01:52:37.138725 | orchestrator | "", 2026-03-29 01:52:37.138735 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-29 01:52:37.138744 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 01:52:37.138754 | orchestrator | " Enabled: true", 2026-03-29 01:52:37.138763 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 01:52:37.138792 | orchestrator | " Status: ✅ MATCH", 2026-03-29 01:52:37.138803 | orchestrator | "", 2026-03-29 01:52:37.138812 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-29 01:52:37.138822 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 01:52:37.138831 | orchestrator | " Enabled: true", 2026-03-29 01:52:37.138849 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 01:52:37.138859 | orchestrator | " Status: ✅ MATCH", 2026-03-29 01:52:37.138869 | orchestrator | "", 2026-03-29 01:52:37.138878 | orchestrator | "=== Summary ===", 2026-03-29 01:52:37.138888 | orchestrator | "Errors (version mismatches): 0", 2026-03-29 01:52:37.138897 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-29 01:52:37.138907 | orchestrator | "", 2026-03-29 01:52:37.138917 | orchestrator | "✅ All running containers match expected versions!" 2026-03-29 01:52:37.138926 | orchestrator | ] 2026-03-29 01:52:37.138936 | orchestrator | } 2026-03-29 01:52:37.138946 | orchestrator | 2026-03-29 01:52:37.138955 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-29 01:52:37.178253 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:52:37.178349 | orchestrator | 2026-03-29 01:52:37.178364 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:52:37.178376 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-29 01:52:37.178388 | orchestrator | 2026-03-29 01:52:37.275360 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-29 01:52:37.275491 | orchestrator | + deactivate 2026-03-29 01:52:37.275517 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-29 01:52:37.275532 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 01:52:37.275544 | orchestrator | + export PATH 2026-03-29 01:52:37.275626 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-29 01:52:37.275640 | orchestrator | + '[' -n '' ']' 2026-03-29 01:52:37.275651 | orchestrator | + hash -r 2026-03-29 01:52:37.275661 | orchestrator | + '[' -n '' ']' 2026-03-29 01:52:37.275672 | orchestrator | + unset VIRTUAL_ENV 2026-03-29 01:52:37.275683 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-29 01:52:37.275694 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-29 01:52:37.275705 | orchestrator | + unset -f deactivate 2026-03-29 01:52:37.275717 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-29 01:52:37.284163 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 01:52:37.284282 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-29 01:52:37.284302 | orchestrator | + local max_attempts=60 2026-03-29 01:52:37.284319 | orchestrator | + local name=ceph-ansible 2026-03-29 01:52:37.284369 | orchestrator | + local attempt_num=1 2026-03-29 01:52:37.284734 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 01:52:37.322412 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 01:52:37.322488 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-29 01:52:37.322496 | orchestrator | + local max_attempts=60 2026-03-29 01:52:37.322503 | orchestrator | + local name=kolla-ansible 2026-03-29 01:52:37.322510 | orchestrator | + local attempt_num=1 2026-03-29 01:52:37.322516 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-29 01:52:37.359710 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 01:52:37.359804 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-29 01:52:37.359820 | orchestrator | + local max_attempts=60 2026-03-29 01:52:37.359831 | orchestrator | + local name=osism-ansible 2026-03-29 01:52:37.359842 | orchestrator | + local attempt_num=1 2026-03-29 01:52:37.360536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-29 01:52:37.401101 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 01:52:37.401193 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-29 01:52:37.401210 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-29 01:52:38.057505 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-29 01:52:38.214373 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-29 01:52:38.214471 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.214487 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.214499 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-29 01:52:38.214512 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-29 01:52:38.214545 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.214612 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.214633 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-03-29 01:52:38.214650 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.214669 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-29 01:52:38.214681 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 
"/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.214691 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-29 01:52:38.214702 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.214733 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-29 01:52:38.214745 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.214755 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-29 01:52:38.219886 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-29 01:52:38.260741 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-29 01:52:38.260840 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-29 01:52:38.265451 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-29 01:52:50.379716 | orchestrator | 2026-03-29 01:52:50 | INFO  | Task 83c078da-2483-4678-a2e5-920f7bbd4d37 (resolvconf) was prepared for execution. 2026-03-29 01:52:50.379806 | orchestrator | 2026-03-29 01:52:50 | INFO  | It takes a moment until task 83c078da-2483-4678-a2e5-920f7bbd4d37 (resolvconf) has been started and output is visible here. 
2026-03-29 01:53:02.782088 | orchestrator | 2026-03-29 01:53:02.782206 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-29 01:53:02.782222 | orchestrator | 2026-03-29 01:53:02.782234 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 01:53:02.782245 | orchestrator | Sunday 29 March 2026 01:52:54 +0000 (0:00:00.099) 0:00:00.099 ********** 2026-03-29 01:53:02.782255 | orchestrator | ok: [testbed-manager] 2026-03-29 01:53:02.782265 | orchestrator | 2026-03-29 01:53:02.782275 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-29 01:53:02.782286 | orchestrator | Sunday 29 March 2026 01:52:57 +0000 (0:00:03.322) 0:00:03.422 ********** 2026-03-29 01:53:02.782296 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:53:02.782307 | orchestrator | 2026-03-29 01:53:02.782317 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-29 01:53:02.782326 | orchestrator | Sunday 29 March 2026 01:52:57 +0000 (0:00:00.062) 0:00:03.484 ********** 2026-03-29 01:53:02.782336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-29 01:53:02.782347 | orchestrator | 2026-03-29 01:53:02.782357 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-29 01:53:02.782366 | orchestrator | Sunday 29 March 2026 01:52:57 +0000 (0:00:00.081) 0:00:03.566 ********** 2026-03-29 01:53:02.782392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 01:53:02.782403 | orchestrator | 2026-03-29 01:53:02.782413 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-29 01:53:02.782423 | orchestrator | Sunday 29 March 2026 01:52:57 +0000 (0:00:00.075) 0:00:03.642 ********** 2026-03-29 01:53:02.782433 | orchestrator | ok: [testbed-manager] 2026-03-29 01:53:02.782442 | orchestrator | 2026-03-29 01:53:02.782452 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-29 01:53:02.782462 | orchestrator | Sunday 29 March 2026 01:52:58 +0000 (0:00:00.853) 0:00:04.495 ********** 2026-03-29 01:53:02.782471 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:53:02.782481 | orchestrator | 2026-03-29 01:53:02.782491 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-29 01:53:02.782500 | orchestrator | Sunday 29 March 2026 01:52:58 +0000 (0:00:00.057) 0:00:04.552 ********** 2026-03-29 01:53:02.782510 | orchestrator | ok: [testbed-manager] 2026-03-29 01:53:02.782539 | orchestrator | 2026-03-29 01:53:02.782550 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-29 01:53:02.782582 | orchestrator | Sunday 29 March 2026 01:52:58 +0000 (0:00:00.441) 0:00:04.994 ********** 2026-03-29 01:53:02.782594 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:53:02.782605 | orchestrator | 2026-03-29 01:53:02.782617 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-29 01:53:02.782629 | orchestrator | Sunday 29 March 2026 01:52:59 +0000 (0:00:00.063) 0:00:05.058 ********** 2026-03-29 01:53:02.782640 | orchestrator | changed: [testbed-manager] 2026-03-29 01:53:02.782651 | orchestrator | 2026-03-29 01:53:02.782663 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-29 01:53:02.782674 | orchestrator | Sunday 29 March 2026 01:52:59 +0000 (0:00:00.504) 0:00:05.563 ********** 2026-03-29 01:53:02.782685 | orchestrator | changed: 
[testbed-manager] 2026-03-29 01:53:02.782696 | orchestrator | 2026-03-29 01:53:02.782708 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-29 01:53:02.782719 | orchestrator | Sunday 29 March 2026 01:53:00 +0000 (0:00:00.997) 0:00:06.560 ********** 2026-03-29 01:53:02.782730 | orchestrator | ok: [testbed-manager] 2026-03-29 01:53:02.782742 | orchestrator | 2026-03-29 01:53:02.782754 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-29 01:53:02.782766 | orchestrator | Sunday 29 March 2026 01:53:01 +0000 (0:00:00.901) 0:00:07.461 ********** 2026-03-29 01:53:02.782777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-29 01:53:02.782788 | orchestrator | 2026-03-29 01:53:02.782800 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-29 01:53:02.782810 | orchestrator | Sunday 29 March 2026 01:53:01 +0000 (0:00:00.065) 0:00:07.527 ********** 2026-03-29 01:53:02.782821 | orchestrator | changed: [testbed-manager] 2026-03-29 01:53:02.782834 | orchestrator | 2026-03-29 01:53:02.782844 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:53:02.782857 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 01:53:02.782869 | orchestrator | 2026-03-29 01:53:02.782880 | orchestrator | 2026-03-29 01:53:02.782891 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:53:02.782902 | orchestrator | Sunday 29 March 2026 01:53:02 +0000 (0:00:01.071) 0:00:08.598 ********** 2026-03-29 01:53:02.782913 | orchestrator | =============================================================================== 2026-03-29 01:53:02.782924 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.32s 2026-03-29 01:53:02.782935 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.07s 2026-03-29 01:53:02.782946 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.00s 2026-03-29 01:53:02.782957 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.90s 2026-03-29 01:53:02.782968 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.85s 2026-03-29 01:53:02.782980 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2026-03-29 01:53:02.783007 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.44s 2026-03-29 01:53:02.783017 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-29 01:53:02.783027 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-03-29 01:53:02.783037 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-03-29 01:53:02.783046 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s 2026-03-29 01:53:02.783056 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-29 01:53:02.783073 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-29 01:53:03.035766 | orchestrator | + osism apply sshconfig 2026-03-29 01:53:15.018672 | orchestrator | 2026-03-29 01:53:15 | INFO  | Task c5b2b469-4f5d-46ea-af4b-01c5a17cca93 (sshconfig) was prepared for execution. 
2026-03-29 01:53:15.018809 | orchestrator | 2026-03-29 01:53:15 | INFO  | It takes a moment until task c5b2b469-4f5d-46ea-af4b-01c5a17cca93 (sshconfig) has been started and output is visible here. 2026-03-29 01:53:25.146739 | orchestrator | 2026-03-29 01:53:25.146867 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-29 01:53:25.146884 | orchestrator | 2026-03-29 01:53:25.146915 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-29 01:53:25.146927 | orchestrator | Sunday 29 March 2026 01:53:18 +0000 (0:00:00.115) 0:00:00.115 ********** 2026-03-29 01:53:25.146937 | orchestrator | ok: [testbed-manager] 2026-03-29 01:53:25.146948 | orchestrator | 2026-03-29 01:53:25.146958 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-29 01:53:25.146968 | orchestrator | Sunday 29 March 2026 01:53:19 +0000 (0:00:00.479) 0:00:00.595 ********** 2026-03-29 01:53:25.146983 | orchestrator | changed: [testbed-manager] 2026-03-29 01:53:25.147001 | orchestrator | 2026-03-29 01:53:25.147016 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-29 01:53:25.147032 | orchestrator | Sunday 29 March 2026 01:53:19 +0000 (0:00:00.451) 0:00:01.046 ********** 2026-03-29 01:53:25.147048 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-29 01:53:25.147063 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-29 01:53:25.147079 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-29 01:53:25.147094 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-29 01:53:25.147112 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-29 01:53:25.147128 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-29 01:53:25.147144 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-29 01:53:25.147161 | orchestrator | 2026-03-29 01:53:25.147178 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-29 01:53:25.147194 | orchestrator | Sunday 29 March 2026 01:53:24 +0000 (0:00:04.865) 0:00:05.912 ********** 2026-03-29 01:53:25.147211 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:53:25.147224 | orchestrator | 2026-03-29 01:53:25.147234 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-29 01:53:25.147243 | orchestrator | Sunday 29 March 2026 01:53:24 +0000 (0:00:00.063) 0:00:05.975 ********** 2026-03-29 01:53:25.147253 | orchestrator | changed: [testbed-manager] 2026-03-29 01:53:25.147263 | orchestrator | 2026-03-29 01:53:25.147272 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:53:25.147285 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 01:53:25.147297 | orchestrator | 2026-03-29 01:53:25.147308 | orchestrator | 2026-03-29 01:53:25.147320 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:53:25.147332 | orchestrator | Sunday 29 March 2026 01:53:24 +0000 (0:00:00.495) 0:00:06.471 ********** 2026-03-29 01:53:25.147343 | orchestrator | =============================================================================== 2026-03-29 01:53:25.147354 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.87s 2026-03-29 01:53:25.147365 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.50s 2026-03-29 01:53:25.147376 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.48s 2026-03-29 01:53:25.147387 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.45s 2026-03-29 01:53:25.147398 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2026-03-29 01:53:25.398817 | orchestrator | + osism apply known-hosts 2026-03-29 01:53:37.294442 | orchestrator | 2026-03-29 01:53:37 | INFO  | Task e75d1860-99bd-498d-b176-0ee10b7b009c (known-hosts) was prepared for execution. 2026-03-29 01:53:37.294532 | orchestrator | 2026-03-29 01:53:37 | INFO  | It takes a moment until task e75d1860-99bd-498d-b176-0ee10b7b009c (known-hosts) has been started and output is visible here. 2026-03-29 01:53:53.449335 | orchestrator | 2026-03-29 01:53:53.449440 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-29 01:53:53.449456 | orchestrator | 2026-03-29 01:53:53.449469 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-29 01:53:53.449481 | orchestrator | Sunday 29 March 2026 01:53:41 +0000 (0:00:00.172) 0:00:00.172 ********** 2026-03-29 01:53:53.449493 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-29 01:53:53.449504 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-29 01:53:53.449515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-29 01:53:53.449526 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-29 01:53:53.449537 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-29 01:53:53.449548 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-29 01:53:53.449597 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-29 01:53:53.449616 | orchestrator | 2026-03-29 01:53:53.449634 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-29 01:53:53.449653 | orchestrator | Sunday 29 March 2026 01:53:47 +0000 (0:00:05.826) 0:00:05.998 ********** 2026-03-29 
01:53:53.449672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-29 01:53:53.449691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-29 01:53:53.449709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-29 01:53:53.449725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-29 01:53:53.449741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-29 01:53:53.449771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-29 01:53:53.449791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-29 01:53:53.449808 | orchestrator | 2026-03-29 01:53:53.449826 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:53:53.449843 | orchestrator | Sunday 29 March 2026 01:53:47 +0000 (0:00:00.177) 0:00:06.176 ********** 2026-03-29 01:53:53.449862 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAILABBJ8PpyKm5HDg4Sk4LKvkaLsYPjXD0SYhBl5rzDhO) 2026-03-29 01:53:53.449894 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD3WbGvi2PowsDzDOYJhuzvrRTV21rFycf42XpTK6ef9mr9mOOKHZ1YQNyHXVp72JJTT3PUiiJGzinU8kYwzAFWF+2FVm/jRDXpmgPeSq5smhAheYJHGPqYr8ksRfV9I8WkYH6xN+QY8mxFdaMTIe0e7DUlLU8n7dBVNL8yx2gELrcDBvMsc187N1rNnv9QSdu4hkB0FZF7Rhel35Gd6QoYIzDODjDAutmTvUiXgm7UlB+GFemJKLqPIq0ZSTTKmtwOIj3eUtVA2ZtvhucCy+NdmtcdSoDJ4YoM50wUqYRYqfG0h4x9ZsqUHmvWY6TixujH6cfhZ0eu8kBo48oHd7EwNq0v3tLY/OhfQYKiqAq55ryBUdFfoap9A0iTlftXL84XYL2RbVbyMoUWMaZd3UFzebclw4vDuYn4c4B7wIhGdaOCiJxJ2jzyVws/y2P3scF9RlVq6o1/LKNmaICqY1XizddbT6ysdl1U67Eqi6YX5PAkYy/kxgmhE2t3TMItNM=) 2026-03-29 01:53:53.449944 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFb4BXc8AS8hpg4seAYGJxKCfLLuKlaEzpJI8/qEQrNqAM49wv4PP51BJHumc/jNJZk+Gnf1vfEUfUWabAy1WTU=) 2026-03-29 01:53:53.449968 | orchestrator | 2026-03-29 01:53:53.449987 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:53:53.450005 | orchestrator | Sunday 29 March 2026 01:53:48 +0000 (0:00:01.139) 0:00:07.316 ********** 2026-03-29 01:53:53.450132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNCU/NRVASvWwQlW85VclGbpAmJMnv1lYGtBW9votPbGe0CME4Fn8SA0JAkyhYtDisNhbtKotv2X/FIcWX2EhaY=) 2026-03-29 01:53:53.450154 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACpuPL5me6Nt+ZaEJ3q6YjDdsALhkC7BV3pHIdREyTl) 2026-03-29 01:53:53.450215 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3p0BvCNapnFg1yxqQfwcFH4Bg60XPMwok1YfBshzFUBsaNfUZIgTpNM4ZRu7R1tjOBXcqNZOTIvlSVaRgibZR846P21dRAX7RUwK2uFXQoH++Arc/QDcYlrLMbL6ZafnGy6rZaD0wdEiDLaUx8OrL4s21urcpS/kF06jZd1vqYKHo6LlRKeB9y3Jetc0HUtHG6wAZrAN4UUoMKXo2LY2dLA6u8QRWJ3jH64EI3z1R4NzpVDipR136ukkcI/w7+JuVq8UXis5PecvOYJCq312syedCcy6idm6TJgyhJtPFw11EktrN7pzkwFJn8mnGnCK1hfs5bpOBP920PDAO04MdsRXJ2Tk4oJ+XEGhSbq5TNVwsQzTxMM+d3AEAezWH+aFb7vz65rFubvzB/9mnlLl8WTN89QmP9biXgfd0JI5EWbf++xLafHkCxUNXyJD0k4s9juP1jNuyqNZZU/Pn9ix/stgpShUZRDh05YPdlgrs0WO0sLwlneNYi6ndO70q4qk=) 2026-03-29 01:53:53.450234 | orchestrator | 2026-03-29 01:53:53.450245 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:53:53.450256 | orchestrator | Sunday 29 March 2026 01:53:49 +0000 (0:00:01.010) 0:00:08.326 ********** 2026-03-29 01:53:53.450267 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCGTtJMolEiJVeespT1rGxayCApTNJzDuwjDKyvkpjbkHKIHE6zREeBZHw59whmB9o6dLaNorokdRs1/M3+Pn1BFACycddreh+3ir1pfhxxO0jdaDJtCjHSOvmS5Pkb7sKjpVfNUBbB4X7T8Vbet2GwPeZRUwNBI2/N+IbhshSa+TtRj3hwogQ5MxL/bZNquyAFMngL3QRUb61dxexQHsRL4f0FBLfGyqZqyWBfLMxQWXZ/PEphQg019KEQAxZ758hMvtIRbgFF3nVni9zCAPmVEB5m8PnkSb/QJyH67+31tjXccnPB52uaMt+auWiwGEIy6AVAhsVjykByqlu9PyW+mXeA7AX4DmgY7McTURJ1SnM7AdIaDQDBnCtx75y8/Wy/o82tndEJXJhn23onAQG5EMNsYZH+Oy33xiXEZKhSN3yrC1dk6Kklg710MxWJnDY/oaA3/PgQ3P/nZ78X3h5wbxAYBwf/yXCqH77Krc6tnLdvzvmQbye7UYhcl8tOSy0=) 2026-03-29 01:53:53.450279 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPVJqxKSZ+EhM9m0AIZpFFR9/lG9KIe3mHI3zxmRTwafuKuQq5Bhcr2nZwapvk+nOO6ELwbawuH3NNs8PJbM1kQ=) 2026-03-29 01:53:53.450290 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMsLaENx9BIGTm8SBlWFM4vo9iEXqBygCX4p2qM4ulhb) 2026-03-29 01:53:53.450302 | orchestrator | 2026-03-29 01:53:53.450312 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:53:53.450323 | orchestrator | Sunday 29 March 2026 01:53:50 +0000 (0:00:00.979) 0:00:09.306 ********** 2026-03-29 01:53:53.450334 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz9Uoujm+Pm3eoOkvWm/JUgX2DHTwR9t9u0dnC37Cld3DaBqk7kQTb7vTZvTH2oH+w3zVcghdtmILxPe9IBnki35jAM/hY/8Sk9U4QPVYkpoOqY54z0+oywfwYWgABu6Futwbn5m8X/B/lebBsLHjA1zOeic4UctSGNTqYdMb9gqT3Gl1ScniWUSW+4Eg1KuOgdbQg4lI2Jd1iTsw01TsKAjjui71gGjKflBbEN7kXECjLQehLHH2p4Ly+93rQFOTS2dW5gKM3esGrMjwnbpp+Iao3oxdtmbZ/PnmOa2Erx8uXO5pSJE3BWRK/PBAVY/BfyBeu12ETssHxhOBKfn7CbzDOiJgadAL/SITQKhTrTy0tgJQC7626YAJ+1L/z8pkQmB+XBM7Z+KGqGsueX2IdVfwbEmzG9nhD2Di4eIhm6hqDenA6AGnl//p/XnVYjA3dyQcXodNd1SeBCRvz3MQKk7UPgpZud21xNpcz97Cee0fG5NlswtIPabmiUdUxVH8=) 2026-03-29 01:53:53.450345 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA6mvp5vz/lF+hAeM4Ns0OekTYs5Fe8jICnghP+8A5ALioEQ6yxH+QpZesefzbmLASZpxqmm3rsf6x9hoAO281Y=) 2026-03-29 01:53:53.450368 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPpcmXkqlfUmNi5bw+pueZ+Sw4Xjnov/F24v9PxWDA9) 2026-03-29 01:53:53.450379 | orchestrator | 2026-03-29 01:53:53.450390 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:53:53.450400 | orchestrator | Sunday 29 March 2026 01:53:51 +0000 (0:00:01.013) 0:00:10.320 ********** 2026-03-29 01:53:53.450484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKSB882TS4EmSd/MgRUuNBNDEPzWRWHykTUqT4IJgRgu38iuQmK1fMhHNrP6K9niDwK1PSUhV21D0971JLbFZ5fD5M9oR6abWHRxuuaAJ60b0oiowp2xCnmOEaj1zPxjYT1hbXwuQJNerU3SS4sXmx9Fu554Wi6wbnVYy7Z65b9YU+ITdkTvtwBJwZa7H5bRV3unAYFmDZSsp8c+8vkPcStSs+yhOzzMyfV8K73pPdAxBTWJ1YZvF/d2GG8DVAUbDQJCcuLjvNkmYjzIEqarLoNglTGGC4hsvYxuqF7DqDMzOS6cEkCC/6swvvtizzZ1ZsjLQX5QmoxPgY8USUKtUf82cpbWw4BmPsmMhhDfpoXDOEmtO9MwJWt8VmU0BDcr5X0dRe4OvsMVX6ZmSjbNT7DiuZVE7dHBSjALggeWJsGEiCiwgjjEhIqKom7mtl3/dPeLY2h//euJ1r6AxH1y/5xcXXyTrEKjztRRH+e78o5Kt5OHTB1Zd3nAE+km/osXc=) 2026-03-29 01:53:53.450496 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMKqVj4v1t63BOKozci4Nb7Eg9ayCoyo8n/tjEDb/kIBOabZXP8bUnwfRe4OeNYjQA9Y+blhzyJY7g4Jbs/3ENg=) 2026-03-29 01:53:53.450507 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOT5ZNHaX0hKcXoEsItQuF8NILAlm6mRG6w/o6eddrER) 2026-03-29 01:53:53.450518 | orchestrator | 2026-03-29 01:53:53.450529 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:53:53.450539 | orchestrator | Sunday 29 March 2026 01:53:52 +0000 (0:00:01.020) 0:00:11.340 ********** 2026-03-29 01:53:53.450582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxCz7DFxpbE6hrNGOlaXMMwQU52NyEuZzeo/GOqv7ESjq00pOanwegVD49TEY3z5TNeJMgO3yuVZQxXgbF7aQ+9neRUBRfDr92GnBNzX5UPbk6qyGDrXSgIXpf0+2ePuKgwB1PGMdIhPO4A2EaYnpz1+jmRbZvnLKFF0z9guOJaQv1a0mqHkeV9siO2aLGG5SiC2RSkIjSG9VmG/Lt/w1tSa1OQto0GFyJjOVWjAVy3ogkY9C/j4QRildNfW7rGrtrvSmlNjedXauMqfcVt06DhrWnNivDCjW4pV4rakM/QyOWeGZAcBmKyF05bMNeyqQhKNcOf4QOpNatgUVmHYG+7m/30E5fLrNrWZleDQFwc6sU4PvSwVCp2mHzRJs4s6aYNOZENKYpK3JHkqx5F7JCNsG4LRW9FQgFtbN+Fg47VZsSYhiexszTtRlJE0PIWP0f36lAMoisvdCIJZ1ZO00OSSN/jjxtapxWz9ge+1N50iDQERDTEIH1S5kEZaZXyFE=) 2026-03-29 01:54:03.617435 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINkO9CqU8+8kPsl6y8kRQh2ckpPtZ2+b7zJ5hQ4BaYno) 2026-03-29 01:54:03.617679 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBIpRpGIPOyvcIMGO0a6a6zfoVXWQb4GFaisyMz7LdE7SjWma0OefgtZk2ZXBPcD6YhRm1nizcQAGaWNU7bqByo=) 2026-03-29 01:54:03.617702 | orchestrator | 2026-03-29 01:54:03.617715 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:54:03.617727 | orchestrator | Sunday 29 March 2026 01:53:53 +0000 (0:00:01.003) 0:00:12.344 ********** 2026-03-29 01:54:03.617738 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBvpv4McwYhrGwpruKV6OJ7Zv0tfXxN6EtMhkGZ6i7Eiu00SiMePk7dQSWdRI0/5udvSW2Z0IaweBFbFjUF89Wk=) 2026-03-29 01:54:03.617752 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCaybreQ9Bku5olYE+Lci8z6IYMdtxuBVU19RNUff1t9x7GONICK1IyjzFqpyozJxoXokIlkFjcyHZ4DSeYGcOg4/7IN75Zo2+HW827LRWrIm8TeD/iSIwQYhL2GSrx189KvBaTasCOVezAKM6H6FLA69i7CLQFFeti57u32Ske+X0fGkxF+eBMx3DsaQb5WJbDCehQ5y9Y6ajXPJvE5y7/hnT1xnAtCsJk+UWMxjLv4bF243icJ8J3RqCwjK6FZY9mQDEzvA1bs8EM+BycrFIKyi9CQmuksDhJhNM2IR5cQaFpX5qOJ6NsHPC8BIdJ0m5pCssz9JPPnb2BRahgwof3urePlERWx+Jv/DLMEO163u27gWe32PPE6y0LdnbrZ3B58qTMuvMtCB6UgBrtAAzPtizVzAl5BJAtQnNa1ZGaZA0LqcjCy2tbjxDnexDJ5yG7apQqSvOa6aCrCKtEMtiyKUekzgbQZKYBJHSB8rI2c0e3+cFAW6+jAhZeCeiXvDU=) 2026-03-29 01:54:03.617789 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJKCE8TsCjLva0UTlRZCuNdG/QK0eVCr8QrHVhaOxiig) 2026-03-29 01:54:03.617801 | orchestrator | 2026-03-29 01:54:03.617812 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-29 01:54:03.617823 | orchestrator | Sunday 29 March 2026 01:53:54 +0000 (0:00:00.975) 
0:00:13.319 ********** 2026-03-29 01:54:03.617835 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-29 01:54:03.617846 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-29 01:54:03.617857 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-29 01:54:03.617867 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-29 01:54:03.617878 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-29 01:54:03.617889 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-29 01:54:03.617899 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-29 01:54:03.617910 | orchestrator | 2026-03-29 01:54:03.617921 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-29 01:54:03.617933 | orchestrator | Sunday 29 March 2026 01:53:59 +0000 (0:00:05.169) 0:00:18.488 ********** 2026-03-29 01:54:03.617944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-29 01:54:03.617957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-29 01:54:03.617968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-29 01:54:03.617982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-29 01:54:03.617994 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-29 01:54:03.618006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-29 01:54:03.618078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-29 01:54:03.618092 | orchestrator | 2026-03-29 01:54:03.618105 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:54:03.618118 | orchestrator | Sunday 29 March 2026 01:53:59 +0000 (0:00:00.167) 0:00:18.656 ********** 2026-03-29 01:54:03.618131 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILABBJ8PpyKm5HDg4Sk4LKvkaLsYPjXD0SYhBl5rzDhO) 2026-03-29 01:54:03.618177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD3WbGvi2PowsDzDOYJhuzvrRTV21rFycf42XpTK6ef9mr9mOOKHZ1YQNyHXVp72JJTT3PUiiJGzinU8kYwzAFWF+2FVm/jRDXpmgPeSq5smhAheYJHGPqYr8ksRfV9I8WkYH6xN+QY8mxFdaMTIe0e7DUlLU8n7dBVNL8yx2gELrcDBvMsc187N1rNnv9QSdu4hkB0FZF7Rhel35Gd6QoYIzDODjDAutmTvUiXgm7UlB+GFemJKLqPIq0ZSTTKmtwOIj3eUtVA2ZtvhucCy+NdmtcdSoDJ4YoM50wUqYRYqfG0h4x9ZsqUHmvWY6TixujH6cfhZ0eu8kBo48oHd7EwNq0v3tLY/OhfQYKiqAq55ryBUdFfoap9A0iTlftXL84XYL2RbVbyMoUWMaZd3UFzebclw4vDuYn4c4B7wIhGdaOCiJxJ2jzyVws/y2P3scF9RlVq6o1/LKNmaICqY1XizddbT6ysdl1U67Eqi6YX5PAkYy/kxgmhE2t3TMItNM=) 2026-03-29 01:54:03.618201 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFb4BXc8AS8hpg4seAYGJxKCfLLuKlaEzpJI8/qEQrNqAM49wv4PP51BJHumc/jNJZk+Gnf1vfEUfUWabAy1WTU=) 2026-03-29 
01:54:03.618223 | orchestrator | 2026-03-29 01:54:03.618236 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:54:03.618250 | orchestrator | Sunday 29 March 2026 01:54:00 +0000 (0:00:00.970) 0:00:19.626 ********** 2026-03-29 01:54:03.618268 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNCU/NRVASvWwQlW85VclGbpAmJMnv1lYGtBW9votPbGe0CME4Fn8SA0JAkyhYtDisNhbtKotv2X/FIcWX2EhaY=) 2026-03-29 01:54:03.618282 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3p0BvCNapnFg1yxqQfwcFH4Bg60XPMwok1YfBshzFUBsaNfUZIgTpNM4ZRu7R1tjOBXcqNZOTIvlSVaRgibZR846P21dRAX7RUwK2uFXQoH++Arc/QDcYlrLMbL6ZafnGy6rZaD0wdEiDLaUx8OrL4s21urcpS/kF06jZd1vqYKHo6LlRKeB9y3Jetc0HUtHG6wAZrAN4UUoMKXo2LY2dLA6u8QRWJ3jH64EI3z1R4NzpVDipR136ukkcI/w7+JuVq8UXis5PecvOYJCq312syedCcy6idm6TJgyhJtPFw11EktrN7pzkwFJn8mnGnCK1hfs5bpOBP920PDAO04MdsRXJ2Tk4oJ+XEGhSbq5TNVwsQzTxMM+d3AEAezWH+aFb7vz65rFubvzB/9mnlLl8WTN89QmP9biXgfd0JI5EWbf++xLafHkCxUNXyJD0k4s9juP1jNuyqNZZU/Pn9ix/stgpShUZRDh05YPdlgrs0WO0sLwlneNYi6ndO70q4qk=) 2026-03-29 01:54:03.618295 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACpuPL5me6Nt+ZaEJ3q6YjDdsALhkC7BV3pHIdREyTl) 2026-03-29 01:54:03.618308 | orchestrator | 2026-03-29 01:54:03.618321 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:54:03.618333 | orchestrator | Sunday 29 March 2026 01:54:01 +0000 (0:00:00.997) 0:00:20.624 ********** 2026-03-29 01:54:03.618344 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCGTtJMolEiJVeespT1rGxayCApTNJzDuwjDKyvkpjbkHKIHE6zREeBZHw59whmB9o6dLaNorokdRs1/M3+Pn1BFACycddreh+3ir1pfhxxO0jdaDJtCjHSOvmS5Pkb7sKjpVfNUBbB4X7T8Vbet2GwPeZRUwNBI2/N+IbhshSa+TtRj3hwogQ5MxL/bZNquyAFMngL3QRUb61dxexQHsRL4f0FBLfGyqZqyWBfLMxQWXZ/PEphQg019KEQAxZ758hMvtIRbgFF3nVni9zCAPmVEB5m8PnkSb/QJyH67+31tjXccnPB52uaMt+auWiwGEIy6AVAhsVjykByqlu9PyW+mXeA7AX4DmgY7McTURJ1SnM7AdIaDQDBnCtx75y8/Wy/o82tndEJXJhn23onAQG5EMNsYZH+Oy33xiXEZKhSN3yrC1dk6Kklg710MxWJnDY/oaA3/PgQ3P/nZ78X3h5wbxAYBwf/yXCqH77Krc6tnLdvzvmQbye7UYhcl8tOSy0=) 2026-03-29 01:54:03.618356 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMsLaENx9BIGTm8SBlWFM4vo9iEXqBygCX4p2qM4ulhb) 2026-03-29 01:54:03.618367 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPVJqxKSZ+EhM9m0AIZpFFR9/lG9KIe3mHI3zxmRTwafuKuQq5Bhcr2nZwapvk+nOO6ELwbawuH3NNs8PJbM1kQ=) 2026-03-29 01:54:03.618378 | orchestrator | 2026-03-29 01:54:03.618389 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:54:03.618400 | orchestrator | Sunday 29 March 2026 01:54:02 +0000 (0:00:00.990) 0:00:21.614 ********** 2026-03-29 01:54:03.618411 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz9Uoujm+Pm3eoOkvWm/JUgX2DHTwR9t9u0dnC37Cld3DaBqk7kQTb7vTZvTH2oH+w3zVcghdtmILxPe9IBnki35jAM/hY/8Sk9U4QPVYkpoOqY54z0+oywfwYWgABu6Futwbn5m8X/B/lebBsLHjA1zOeic4UctSGNTqYdMb9gqT3Gl1ScniWUSW+4Eg1KuOgdbQg4lI2Jd1iTsw01TsKAjjui71gGjKflBbEN7kXECjLQehLHH2p4Ly+93rQFOTS2dW5gKM3esGrMjwnbpp+Iao3oxdtmbZ/PnmOa2Erx8uXO5pSJE3BWRK/PBAVY/BfyBeu12ETssHxhOBKfn7CbzDOiJgadAL/SITQKhTrTy0tgJQC7626YAJ+1L/z8pkQmB+XBM7Z+KGqGsueX2IdVfwbEmzG9nhD2Di4eIhm6hqDenA6AGnl//p/XnVYjA3dyQcXodNd1SeBCRvz3MQKk7UPgpZud21xNpcz97Cee0fG5NlswtIPabmiUdUxVH8=) 2026-03-29 01:54:03.618422 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPpcmXkqlfUmNi5bw+pueZ+Sw4Xjnov/F24v9PxWDA9) 2026-03-29 01:54:03.618442 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA6mvp5vz/lF+hAeM4Ns0OekTYs5Fe8jICnghP+8A5ALioEQ6yxH+QpZesefzbmLASZpxqmm3rsf6x9hoAO281Y=) 2026-03-29 01:54:07.533689 | orchestrator | 2026-03-29 01:54:07.533819 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:54:07.533846 | orchestrator | Sunday 29 March 2026 01:54:03 +0000 (0:00:00.902) 0:00:22.517 ********** 2026-03-29 01:54:07.533859 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMKqVj4v1t63BOKozci4Nb7Eg9ayCoyo8n/tjEDb/kIBOabZXP8bUnwfRe4OeNYjQA9Y+blhzyJY7g4Jbs/3ENg=) 2026-03-29 01:54:07.533876 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKSB882TS4EmSd/MgRUuNBNDEPzWRWHykTUqT4IJgRgu38iuQmK1fMhHNrP6K9niDwK1PSUhV21D0971JLbFZ5fD5M9oR6abWHRxuuaAJ60b0oiowp2xCnmOEaj1zPxjYT1hbXwuQJNerU3SS4sXmx9Fu554Wi6wbnVYy7Z65b9YU+ITdkTvtwBJwZa7H5bRV3unAYFmDZSsp8c+8vkPcStSs+yhOzzMyfV8K73pPdAxBTWJ1YZvF/d2GG8DVAUbDQJCcuLjvNkmYjzIEqarLoNglTGGC4hsvYxuqF7DqDMzOS6cEkCC/6swvvtizzZ1ZsjLQX5QmoxPgY8USUKtUf82cpbWw4BmPsmMhhDfpoXDOEmtO9MwJWt8VmU0BDcr5X0dRe4OvsMVX6ZmSjbNT7DiuZVE7dHBSjALggeWJsGEiCiwgjjEhIqKom7mtl3/dPeLY2h//euJ1r6AxH1y/5xcXXyTrEKjztRRH+e78o5Kt5OHTB1Zd3nAE+km/osXc=) 2026-03-29 01:54:07.533890 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOT5ZNHaX0hKcXoEsItQuF8NILAlm6mRG6w/o6eddrER) 2026-03-29 01:54:07.533902 | orchestrator | 2026-03-29 01:54:07.533911 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:54:07.533921 | orchestrator | Sunday 29 March 2026 01:54:04 +0000 (0:00:00.920) 0:00:23.437 
********** 2026-03-29 01:54:07.533932 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxCz7DFxpbE6hrNGOlaXMMwQU52NyEuZzeo/GOqv7ESjq00pOanwegVD49TEY3z5TNeJMgO3yuVZQxXgbF7aQ+9neRUBRfDr92GnBNzX5UPbk6qyGDrXSgIXpf0+2ePuKgwB1PGMdIhPO4A2EaYnpz1+jmRbZvnLKFF0z9guOJaQv1a0mqHkeV9siO2aLGG5SiC2RSkIjSG9VmG/Lt/w1tSa1OQto0GFyJjOVWjAVy3ogkY9C/j4QRildNfW7rGrtrvSmlNjedXauMqfcVt06DhrWnNivDCjW4pV4rakM/QyOWeGZAcBmKyF05bMNeyqQhKNcOf4QOpNatgUVmHYG+7m/30E5fLrNrWZleDQFwc6sU4PvSwVCp2mHzRJs4s6aYNOZENKYpK3JHkqx5F7JCNsG4LRW9FQgFtbN+Fg47VZsSYhiexszTtRlJE0PIWP0f36lAMoisvdCIJZ1ZO00OSSN/jjxtapxWz9ge+1N50iDQERDTEIH1S5kEZaZXyFE=) 2026-03-29 01:54:07.533942 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBIpRpGIPOyvcIMGO0a6a6zfoVXWQb4GFaisyMz7LdE7SjWma0OefgtZk2ZXBPcD6YhRm1nizcQAGaWNU7bqByo=) 2026-03-29 01:54:07.533952 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINkO9CqU8+8kPsl6y8kRQh2ckpPtZ2+b7zJ5hQ4BaYno) 2026-03-29 01:54:07.533961 | orchestrator | 2026-03-29 01:54:07.533971 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 01:54:07.533981 | orchestrator | Sunday 29 March 2026 01:54:05 +0000 (0:00:00.968) 0:00:24.405 ********** 2026-03-29 01:54:07.533990 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCaybreQ9Bku5olYE+Lci8z6IYMdtxuBVU19RNUff1t9x7GONICK1IyjzFqpyozJxoXokIlkFjcyHZ4DSeYGcOg4/7IN75Zo2+HW827LRWrIm8TeD/iSIwQYhL2GSrx189KvBaTasCOVezAKM6H6FLA69i7CLQFFeti57u32Ske+X0fGkxF+eBMx3DsaQb5WJbDCehQ5y9Y6ajXPJvE5y7/hnT1xnAtCsJk+UWMxjLv4bF243icJ8J3RqCwjK6FZY9mQDEzvA1bs8EM+BycrFIKyi9CQmuksDhJhNM2IR5cQaFpX5qOJ6NsHPC8BIdJ0m5pCssz9JPPnb2BRahgwof3urePlERWx+Jv/DLMEO163u27gWe32PPE6y0LdnbrZ3B58qTMuvMtCB6UgBrtAAzPtizVzAl5BJAtQnNa1ZGaZA0LqcjCy2tbjxDnexDJ5yG7apQqSvOa6aCrCKtEMtiyKUekzgbQZKYBJHSB8rI2c0e3+cFAW6+jAhZeCeiXvDU=) 2026-03-29 01:54:07.534079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBvpv4McwYhrGwpruKV6OJ7Zv0tfXxN6EtMhkGZ6i7Eiu00SiMePk7dQSWdRI0/5udvSW2Z0IaweBFbFjUF89Wk=) 2026-03-29 01:54:07.534092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJKCE8TsCjLva0UTlRZCuNdG/QK0eVCr8QrHVhaOxiig) 2026-03-29 01:54:07.534102 | orchestrator | 2026-03-29 01:54:07.534112 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-29 01:54:07.534143 | orchestrator | Sunday 29 March 2026 01:54:06 +0000 (0:00:00.949) 0:00:25.355 ********** 2026-03-29 01:54:07.534154 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-29 01:54:07.534167 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-29 01:54:07.534178 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-29 01:54:07.534189 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-29 01:54:07.534201 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-29 01:54:07.534212 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-29 01:54:07.534223 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-29 01:54:07.534234 | orchestrator | 
skipping: [testbed-manager] 2026-03-29 01:54:07.534246 | orchestrator | 2026-03-29 01:54:07.534277 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-29 01:54:07.534288 | orchestrator | Sunday 29 March 2026 01:54:06 +0000 (0:00:00.157) 0:00:25.512 ********** 2026-03-29 01:54:07.534299 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:54:07.534310 | orchestrator | 2026-03-29 01:54:07.534321 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-29 01:54:07.534332 | orchestrator | Sunday 29 March 2026 01:54:06 +0000 (0:00:00.051) 0:00:25.564 ********** 2026-03-29 01:54:07.534343 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:54:07.534353 | orchestrator | 2026-03-29 01:54:07.534362 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-29 01:54:07.534371 | orchestrator | Sunday 29 March 2026 01:54:06 +0000 (0:00:00.048) 0:00:25.613 ********** 2026-03-29 01:54:07.534381 | orchestrator | changed: [testbed-manager] 2026-03-29 01:54:07.534390 | orchestrator | 2026-03-29 01:54:07.534412 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:54:07.534422 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 01:54:07.534434 | orchestrator | 2026-03-29 01:54:07.534443 | orchestrator | 2026-03-29 01:54:07.534452 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:54:07.534462 | orchestrator | Sunday 29 March 2026 01:54:07 +0000 (0:00:00.644) 0:00:26.257 ********** 2026-03-29 01:54:07.534477 | orchestrator | =============================================================================== 2026-03-29 01:54:07.534486 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.83s 2026-03-29 
01:54:07.534496 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.17s 2026-03-29 01:54:07.534506 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-29 01:54:07.534515 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-29 01:54:07.534524 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-29 01:54:07.534534 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-29 01:54:07.534543 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-29 01:54:07.534577 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-29 01:54:07.534588 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-29 01:54:07.534597 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-29 01:54:07.534607 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-29 01:54:07.534616 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-29 01:54:07.534626 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-29 01:54:07.534635 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-29 01:54:07.534644 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2026-03-29 01:54:07.534661 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.90s 2026-03-29 01:54:07.534670 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.64s 2026-03-29 
01:54:07.534679 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-03-29 01:54:07.534689 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-29 01:54:07.534699 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-03-29 01:54:07.800445 | orchestrator | + osism apply squid 2026-03-29 01:54:19.717649 | orchestrator | 2026-03-29 01:54:19 | INFO  | Task cb74717f-0ecb-4aad-adf6-a16c65caf79e (squid) was prepared for execution. 2026-03-29 01:54:19.717779 | orchestrator | 2026-03-29 01:54:19 | INFO  | It takes a moment until task cb74717f-0ecb-4aad-adf6-a16c65caf79e (squid) has been started and output is visible here. 2026-03-29 01:56:12.147922 | orchestrator | 2026-03-29 01:56:12.148022 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-29 01:56:12.148034 | orchestrator | 2026-03-29 01:56:12.148041 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-29 01:56:12.148049 | orchestrator | Sunday 29 March 2026 01:54:23 +0000 (0:00:00.115) 0:00:00.115 ********** 2026-03-29 01:56:12.148057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 01:56:12.148064 | orchestrator | 2026-03-29 01:56:12.148071 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-29 01:56:12.148078 | orchestrator | Sunday 29 March 2026 01:54:23 +0000 (0:00:00.087) 0:00:00.203 ********** 2026-03-29 01:56:12.148084 | orchestrator | ok: [testbed-manager] 2026-03-29 01:56:12.148092 | orchestrator | 2026-03-29 01:56:12.148099 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-29 
01:56:12.148105 | orchestrator | Sunday 29 March 2026 01:54:24 +0000 (0:00:01.106) 0:00:01.309 ********** 2026-03-29 01:56:12.148113 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-29 01:56:12.148120 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-29 01:56:12.148128 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-29 01:56:12.148134 | orchestrator | 2026-03-29 01:56:12.148141 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-29 01:56:12.148147 | orchestrator | Sunday 29 March 2026 01:54:25 +0000 (0:00:00.991) 0:00:02.301 ********** 2026-03-29 01:56:12.148154 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-29 01:56:12.148161 | orchestrator | 2026-03-29 01:56:12.148168 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-29 01:56:12.148174 | orchestrator | Sunday 29 March 2026 01:54:26 +0000 (0:00:00.947) 0:00:03.249 ********** 2026-03-29 01:56:12.148181 | orchestrator | ok: [testbed-manager] 2026-03-29 01:56:12.148188 | orchestrator | 2026-03-29 01:56:12.148194 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-29 01:56:12.148201 | orchestrator | Sunday 29 March 2026 01:54:26 +0000 (0:00:00.333) 0:00:03.583 ********** 2026-03-29 01:56:12.148208 | orchestrator | changed: [testbed-manager] 2026-03-29 01:56:12.148215 | orchestrator | 2026-03-29 01:56:12.148222 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-29 01:56:12.148229 | orchestrator | Sunday 29 March 2026 01:54:27 +0000 (0:00:00.842) 0:00:04.425 ********** 2026-03-29 01:56:12.148235 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-29 01:56:12.148243 | orchestrator | ok: [testbed-manager] 2026-03-29 01:56:12.148254 | orchestrator | 2026-03-29 01:56:12.148261 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-29 01:56:12.148267 | orchestrator | Sunday 29 March 2026 01:54:59 +0000 (0:00:31.351) 0:00:35.777 ********** 2026-03-29 01:56:12.148294 | orchestrator | changed: [testbed-manager] 2026-03-29 01:56:12.148301 | orchestrator | 2026-03-29 01:56:12.148308 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-29 01:56:12.148314 | orchestrator | Sunday 29 March 2026 01:55:11 +0000 (0:00:12.056) 0:00:47.833 ********** 2026-03-29 01:56:12.148321 | orchestrator | Pausing for 60 seconds 2026-03-29 01:56:12.148328 | orchestrator | changed: [testbed-manager] 2026-03-29 01:56:12.148334 | orchestrator | 2026-03-29 01:56:12.148341 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-29 01:56:12.148348 | orchestrator | Sunday 29 March 2026 01:56:11 +0000 (0:01:00.087) 0:01:47.921 ********** 2026-03-29 01:56:12.148354 | orchestrator | ok: [testbed-manager] 2026-03-29 01:56:12.148360 | orchestrator | 2026-03-29 01:56:12.148367 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-29 01:56:12.148374 | orchestrator | Sunday 29 March 2026 01:56:11 +0000 (0:00:00.056) 0:01:47.978 ********** 2026-03-29 01:56:12.148380 | orchestrator | changed: [testbed-manager] 2026-03-29 01:56:12.148387 | orchestrator | 2026-03-29 01:56:12.148393 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:56:12.148400 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:56:12.148406 | orchestrator | 2026-03-29 01:56:12.148413 | orchestrator | 2026-03-29 01:56:12.148419 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-29 01:56:12.148426 | orchestrator | Sunday 29 March 2026 01:56:11 +0000 (0:00:00.608) 0:01:48.586 ********** 2026-03-29 01:56:12.148433 | orchestrator | =============================================================================== 2026-03-29 01:56:12.148439 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-29 01:56:12.148446 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.35s 2026-03-29 01:56:12.148453 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.06s 2026-03-29 01:56:12.148473 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.11s 2026-03-29 01:56:12.148480 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.99s 2026-03-29 01:56:12.148487 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.95s 2026-03-29 01:56:12.148494 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.84s 2026-03-29 01:56:12.148501 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2026-03-29 01:56:12.148508 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-03-29 01:56:12.148514 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-03-29 01:56:12.148521 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-03-29 01:56:12.445390 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-29 01:56:12.446129 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-29 01:56:12.494086 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 01:56:12.494171 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-29 01:56:12.498685 | orchestrator | + set -e 2026-03-29 01:56:12.498783 | orchestrator | + NAMESPACE=kolla/release 2026-03-29 01:56:12.498804 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-29 01:56:12.504751 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-29 01:56:12.565455 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-29 01:56:12.566101 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-29 01:56:24.702835 | orchestrator | 2026-03-29 01:56:24 | INFO  | Task 93b4fa00-1859-4ab1-9a56-d2c1a699dd6b (operator) was prepared for execution. 2026-03-29 01:56:24.702948 | orchestrator | 2026-03-29 01:56:24 | INFO  | It takes a moment until task 93b4fa00-1859-4ab1-9a56-d2c1a699dd6b (operator) has been started and output is visible here. 2026-03-29 01:56:40.517426 | orchestrator | 2026-03-29 01:56:40.517605 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-29 01:56:40.517631 | orchestrator | 2026-03-29 01:56:40.517647 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 01:56:40.517662 | orchestrator | Sunday 29 March 2026 01:56:28 +0000 (0:00:00.104) 0:00:00.104 ********** 2026-03-29 01:56:40.517676 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:56:40.517690 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:56:40.517704 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:56:40.517717 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:56:40.517731 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:56:40.517744 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:56:40.517758 | orchestrator | 2026-03-29 01:56:40.517771 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-29 01:56:40.517784 | orchestrator | Sunday 29 March 2026 01:56:32 +0000 (0:00:03.464) 0:00:03.569 
********** 2026-03-29 01:56:40.517798 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:56:40.517813 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:56:40.517826 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:56:40.517840 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:56:40.517854 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:56:40.517868 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:56:40.517879 | orchestrator | 2026-03-29 01:56:40.517887 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-29 01:56:40.517895 | orchestrator | 2026-03-29 01:56:40.517903 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-29 01:56:40.517911 | orchestrator | Sunday 29 March 2026 01:56:32 +0000 (0:00:00.856) 0:00:04.426 ********** 2026-03-29 01:56:40.517919 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:56:40.517927 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:56:40.517935 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:56:40.517943 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:56:40.517950 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:56:40.517960 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:56:40.517970 | orchestrator | 2026-03-29 01:56:40.517979 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-29 01:56:40.517988 | orchestrator | Sunday 29 March 2026 01:56:33 +0000 (0:00:00.159) 0:00:04.586 ********** 2026-03-29 01:56:40.518013 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:56:40.518080 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:56:40.518094 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:56:40.518108 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:56:40.518122 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:56:40.518137 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:56:40.518151 | orchestrator | 2026-03-29 01:56:40.518166 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-29 01:56:40.518180 | orchestrator | Sunday 29 March 2026 01:56:33 +0000 (0:00:00.154) 0:00:04.740 ********** 2026-03-29 01:56:40.518195 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:56:40.518210 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:56:40.518225 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:56:40.518241 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:56:40.518255 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:56:40.518267 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:56:40.518276 | orchestrator | 2026-03-29 01:56:40.518286 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-29 01:56:40.518295 | orchestrator | Sunday 29 March 2026 01:56:33 +0000 (0:00:00.612) 0:00:05.352 ********** 2026-03-29 01:56:40.518303 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:56:40.518311 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:56:40.518319 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:56:40.518327 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:56:40.518335 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:56:40.518342 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:56:40.518353 | orchestrator | 2026-03-29 01:56:40.518367 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-29 01:56:40.518404 | orchestrator | Sunday 29 March 2026 01:56:34 +0000 (0:00:00.838) 0:00:06.190 ********** 2026-03-29 01:56:40.518418 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-29 01:56:40.518431 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-29 01:56:40.518444 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-29 01:56:40.518457 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-29 01:56:40.518469 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-29 01:56:40.518482 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-29 01:56:40.518495 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-29 01:56:40.518508 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-29 01:56:40.518521 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-29 01:56:40.518534 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-29 01:56:40.518547 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-29 01:56:40.518587 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-29 01:56:40.518601 | orchestrator | 2026-03-29 01:56:40.518614 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-29 01:56:40.518628 | orchestrator | Sunday 29 March 2026 01:56:35 +0000 (0:00:01.260) 0:00:07.451 ********** 2026-03-29 01:56:40.518641 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:56:40.518653 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:56:40.518665 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:56:40.518674 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:56:40.518681 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:56:40.518689 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:56:40.518696 | orchestrator | 2026-03-29 01:56:40.518705 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-29 01:56:40.518713 | orchestrator | Sunday 29 March 2026 01:56:37 +0000 (0:00:01.241) 0:00:08.693 ********** 2026-03-29 01:56:40.518721 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-29 01:56:40.518729 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-29 01:56:40.518737 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-29 01:56:40.518745 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 01:56:40.518773 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 01:56:40.518782 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 01:56:40.518789 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 01:56:40.518797 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 01:56:40.518805 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 01:56:40.518813 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-29 01:56:40.518820 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-29 01:56:40.518828 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-29 01:56:40.518836 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-29 01:56:40.518844 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-29 01:56:40.518851 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-29 01:56:40.518859 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-29 01:56:40.518867 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-29 01:56:40.518874 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-29 01:56:40.518882 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-29 01:56:40.518903 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-29 01:56:40.518931 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-29 01:56:40.518940 | 
orchestrator | 2026-03-29 01:56:40.518955 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-29 01:56:40.518969 | orchestrator | Sunday 29 March 2026 01:56:38 +0000 (0:00:01.310) 0:00:10.003 ********** 2026-03-29 01:56:40.518982 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:56:40.518994 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:56:40.519006 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:56:40.519020 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:56:40.519032 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:56:40.519044 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:56:40.519057 | orchestrator | 2026-03-29 01:56:40.519069 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-29 01:56:40.519081 | orchestrator | Sunday 29 March 2026 01:56:38 +0000 (0:00:00.137) 0:00:10.140 ********** 2026-03-29 01:56:40.519095 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:56:40.519108 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:56:40.519120 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:56:40.519133 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:56:40.519146 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:56:40.519160 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:56:40.519173 | orchestrator | 2026-03-29 01:56:40.519188 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-29 01:56:40.519202 | orchestrator | Sunday 29 March 2026 01:56:38 +0000 (0:00:00.161) 0:00:10.302 ********** 2026-03-29 01:56:40.519216 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:56:40.519230 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:56:40.519239 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:56:40.519246 | orchestrator | changed: [testbed-node-1] 2026-03-29 
01:56:40.519254 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:56:40.519262 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:56:40.519270 | orchestrator | 2026-03-29 01:56:40.519278 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-29 01:56:40.519286 | orchestrator | Sunday 29 March 2026 01:56:39 +0000 (0:00:00.618) 0:00:10.921 ********** 2026-03-29 01:56:40.519293 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:56:40.519301 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:56:40.519309 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:56:40.519316 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:56:40.519324 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:56:40.519332 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:56:40.519340 | orchestrator | 2026-03-29 01:56:40.519348 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-29 01:56:40.519355 | orchestrator | Sunday 29 March 2026 01:56:39 +0000 (0:00:00.151) 0:00:11.072 ********** 2026-03-29 01:56:40.519363 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 01:56:40.519383 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:56:40.519391 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 01:56:40.519400 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 01:56:40.519407 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:56:40.519415 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:56:40.519423 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 01:56:40.519430 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-29 01:56:40.519438 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:56:40.519446 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:56:40.519453 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-29 
01:56:40.519461 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:56:40.519469 | orchestrator | 2026-03-29 01:56:40.519476 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-29 01:56:40.519484 | orchestrator | Sunday 29 March 2026 01:56:40 +0000 (0:00:00.704) 0:00:11.777 ********** 2026-03-29 01:56:40.519499 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:56:40.519507 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:56:40.519515 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:56:40.519522 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:56:40.519530 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:56:40.519538 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:56:40.519545 | orchestrator | 2026-03-29 01:56:40.519608 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-29 01:56:40.519618 | orchestrator | Sunday 29 March 2026 01:56:40 +0000 (0:00:00.134) 0:00:11.912 ********** 2026-03-29 01:56:40.519626 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:56:40.519633 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:56:40.519641 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:56:40.519649 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:56:40.519665 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:56:41.897210 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:56:41.897332 | orchestrator | 2026-03-29 01:56:41.897350 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-29 01:56:41.897364 | orchestrator | Sunday 29 March 2026 01:56:40 +0000 (0:00:00.140) 0:00:12.052 ********** 2026-03-29 01:56:41.897376 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:56:41.897387 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:56:41.897398 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
01:56:41.897409 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:56:41.897419 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:56:41.897430 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:56:41.897441 | orchestrator | 2026-03-29 01:56:41.897452 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-29 01:56:41.897463 | orchestrator | Sunday 29 March 2026 01:56:40 +0000 (0:00:00.153) 0:00:12.205 ********** 2026-03-29 01:56:41.897474 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:56:41.897485 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:56:41.897495 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:56:41.897506 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:56:41.897517 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:56:41.897527 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:56:41.897538 | orchestrator | 2026-03-29 01:56:41.897549 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-29 01:56:41.897618 | orchestrator | Sunday 29 March 2026 01:56:41 +0000 (0:00:00.744) 0:00:12.950 ********** 2026-03-29 01:56:41.897629 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:56:41.897640 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:56:41.897651 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:56:41.897662 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:56:41.897673 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:56:41.897684 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:56:41.897694 | orchestrator | 2026-03-29 01:56:41.897705 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:56:41.897736 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:56:41.897751 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:56:41.897763 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:56:41.897775 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:56:41.897788 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:56:41.897821 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:56:41.897834 | orchestrator | 2026-03-29 01:56:41.897846 | orchestrator | 2026-03-29 01:56:41.897859 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:56:41.897872 | orchestrator | Sunday 29 March 2026 01:56:41 +0000 (0:00:00.250) 0:00:13.200 ********** 2026-03-29 01:56:41.897888 | orchestrator | =============================================================================== 2026-03-29 01:56:41.897907 | orchestrator | Gathering Facts --------------------------------------------------------- 3.46s 2026-03-29 01:56:41.897925 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.31s 2026-03-29 01:56:41.897977 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.26s 2026-03-29 01:56:41.897994 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s 2026-03-29 01:56:41.898013 | orchestrator | Do not require tty for all users ---------------------------------------- 0.86s 2026-03-29 01:56:41.898102 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2026-03-29 01:56:41.898116 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.74s 2026-03-29 01:56:41.898127 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.70s 2026-03-29 01:56:41.898138 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s 2026-03-29 01:56:41.898149 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s 2026-03-29 01:56:41.898159 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2026-03-29 01:56:41.898170 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-03-29 01:56:41.898181 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-03-29 01:56:41.898192 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2026-03-29 01:56:41.898202 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-03-29 01:56:41.898213 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2026-03-29 01:56:41.898224 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2026-03-29 01:56:41.898235 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2026-03-29 01:56:41.898245 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s 2026-03-29 01:56:42.145225 | orchestrator | + osism apply --environment custom facts 2026-03-29 01:56:43.991886 | orchestrator | 2026-03-29 01:56:43 | INFO  | Trying to run play facts in environment custom 2026-03-29 01:56:54.082379 | orchestrator | 2026-03-29 01:56:54 | INFO  | Task dd23b000-1700-4cef-80fc-36bd7f32c903 (facts) was prepared for execution. 2026-03-29 01:56:54.082477 | orchestrator | 2026-03-29 01:56:54 | INFO  | It takes a moment until task dd23b000-1700-4cef-80fc-36bd7f32c903 (facts) has been started and output is visible here. 
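The `set -x` trace earlier in this log (`semver 9.5.0 10.0.0-0`, `[[ -1 -ge 0 ]]`, then `set-kolla-namespace.sh kolla/release` running `sed` on `group_vars/all/kolla.yml`) is a version gate: testbeds below 10.0.0 pull Kolla images from the `kolla/release` namespace. The real `semver` helper is not shown in the log; the sketch below is a stand-in that approximates it with `sort -V` and writes to a temp file instead of the real inventory path.

```shell
#!/usr/bin/env bash
# Sketch of the gate traced above. Assumption: the "semver" helper prints
# -1 / 0 / 1 for a<b / a==b / a>b, which matches the "[[ -1 -ge 0 ]]" trace.
set -e

semver_cmp() {
  if [ "$1" = "$2" ]; then echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
  else echo 1
  fi
}

version=9.5.0
kolla_yml=$(mktemp)    # stand-in for /opt/configuration/inventory/group_vars/all/kolla.yml
echo 'docker_namespace: kolla/unstable' > "$kolla_yml"

# Trace logic: version is pinned (!= latest) and below 10.0.0, so point
# docker_namespace at the kolla/release registry namespace.
if [ "$version" != latest ] && [ "$(semver_cmp "$version" 10.0.0)" -lt 0 ]; then
  sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' "$kolla_yml"
fi
cat "$kolla_yml"    # docker_namespace: kolla/release
```

The second comparison in the trace (`semver 9.5.0 9.0.0` → `1`, `[[ 1 -lt 0 ]]` false) is the same helper used as a lower bound, so the namespace switch only applies inside a version window.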
2026-03-29 01:57:40.919677 | orchestrator | 2026-03-29 01:57:40.919795 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-29 01:57:40.919812 | orchestrator | 2026-03-29 01:57:40.919824 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-29 01:57:40.919836 | orchestrator | Sunday 29 March 2026 01:56:57 +0000 (0:00:00.081) 0:00:00.081 ********** 2026-03-29 01:57:40.919847 | orchestrator | ok: [testbed-manager] 2026-03-29 01:57:40.919860 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:57:40.919871 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:57:40.919882 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:57:40.919893 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:57:40.919904 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:57:40.919914 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:57:40.919948 | orchestrator | 2026-03-29 01:57:40.919960 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-29 01:57:40.919971 | orchestrator | Sunday 29 March 2026 01:56:59 +0000 (0:00:01.360) 0:00:01.442 ********** 2026-03-29 01:57:40.919982 | orchestrator | ok: [testbed-manager] 2026-03-29 01:57:40.919993 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:57:40.920004 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:57:40.920014 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:57:40.920025 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:57:40.920035 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:57:40.920046 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:57:40.920056 | orchestrator | 2026-03-29 01:57:40.920067 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-29 01:57:40.920078 | orchestrator | 2026-03-29 01:57:40.920089 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-29 01:57:40.920100 | orchestrator | Sunday 29 March 2026 01:57:00 +0000 (0:00:01.165) 0:00:02.607 ********** 2026-03-29 01:57:40.920110 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:57:40.920121 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:57:40.920132 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:57:40.920143 | orchestrator | 2026-03-29 01:57:40.920153 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-29 01:57:40.920165 | orchestrator | Sunday 29 March 2026 01:57:00 +0000 (0:00:00.089) 0:00:02.696 ********** 2026-03-29 01:57:40.920178 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:57:40.920190 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:57:40.920203 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:57:40.920214 | orchestrator | 2026-03-29 01:57:40.920227 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-29 01:57:40.920240 | orchestrator | Sunday 29 March 2026 01:57:00 +0000 (0:00:00.184) 0:00:02.880 ********** 2026-03-29 01:57:40.920252 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:57:40.920265 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:57:40.920277 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:57:40.920288 | orchestrator | 2026-03-29 01:57:40.920301 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-29 01:57:40.920314 | orchestrator | Sunday 29 March 2026 01:57:00 +0000 (0:00:00.210) 0:00:03.090 ********** 2026-03-29 01:57:40.920327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:57:40.920341 | orchestrator | 2026-03-29 01:57:40.920354 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-29 01:57:40.920367 | orchestrator | Sunday 29 March 2026 01:57:01 +0000 (0:00:00.125) 0:00:03.216 ********** 2026-03-29 01:57:40.920379 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:57:40.920391 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:57:40.920403 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:57:40.920415 | orchestrator | 2026-03-29 01:57:40.920427 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-29 01:57:40.920440 | orchestrator | Sunday 29 March 2026 01:57:01 +0000 (0:00:00.465) 0:00:03.682 ********** 2026-03-29 01:57:40.920453 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:57:40.920466 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:57:40.920478 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:57:40.920491 | orchestrator | 2026-03-29 01:57:40.920503 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-29 01:57:40.920515 | orchestrator | Sunday 29 March 2026 01:57:01 +0000 (0:00:00.116) 0:00:03.798 ********** 2026-03-29 01:57:40.920526 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:57:40.920537 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:57:40.920548 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:57:40.920586 | orchestrator | 2026-03-29 01:57:40.920599 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-29 01:57:40.920618 | orchestrator | Sunday 29 March 2026 01:57:02 +0000 (0:00:01.068) 0:00:04.867 ********** 2026-03-29 01:57:40.920629 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:57:40.920639 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:57:40.920650 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:57:40.920661 | orchestrator | 2026-03-29 01:57:40.920671 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-29 
01:57:40.920682 | orchestrator | Sunday 29 March 2026 01:57:03 +0000 (0:00:00.483) 0:00:05.350 ********** 2026-03-29 01:57:40.920692 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:57:40.920703 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:57:40.920714 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:57:40.920724 | orchestrator | 2026-03-29 01:57:40.920735 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-29 01:57:40.920794 | orchestrator | Sunday 29 March 2026 01:57:04 +0000 (0:00:01.168) 0:00:06.519 ********** 2026-03-29 01:57:40.920807 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:57:40.920818 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:57:40.920828 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:57:40.920839 | orchestrator | 2026-03-29 01:57:40.920850 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-29 01:57:40.920860 | orchestrator | Sunday 29 March 2026 01:57:22 +0000 (0:00:18.062) 0:00:24.582 ********** 2026-03-29 01:57:40.920871 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:57:40.920882 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:57:40.920892 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:57:40.920906 | orchestrator | 2026-03-29 01:57:40.920923 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-29 01:57:40.920963 | orchestrator | Sunday 29 March 2026 01:57:22 +0000 (0:00:00.095) 0:00:24.678 ********** 2026-03-29 01:57:40.920983 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:57:40.921001 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:57:40.921014 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:57:40.921025 | orchestrator | 2026-03-29 01:57:40.921088 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-29 
01:57:40.921102 | orchestrator | Sunday 29 March 2026 01:57:31 +0000 (0:00:08.822) 0:00:33.501 ********** 2026-03-29 01:57:40.921113 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:57:40.921124 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:57:40.921135 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:57:40.921145 | orchestrator | 2026-03-29 01:57:40.921156 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-29 01:57:40.921167 | orchestrator | Sunday 29 March 2026 01:57:31 +0000 (0:00:00.465) 0:00:33.966 ********** 2026-03-29 01:57:40.921178 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-29 01:57:40.921189 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-29 01:57:40.921200 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-29 01:57:40.921211 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-29 01:57:40.921221 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-29 01:57:40.921239 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-29 01:57:40.921250 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-29 01:57:40.921260 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-29 01:57:40.921271 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-29 01:57:40.921282 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-29 01:57:40.921293 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-29 01:57:40.921303 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-29 01:57:40.921314 | orchestrator | 2026-03-29 01:57:40.921325 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-03-29 01:57:40.921345 | orchestrator | Sunday 29 March 2026 01:57:35 +0000 (0:00:03.677) 0:00:37.644 ********** 2026-03-29 01:57:40.921356 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:57:40.921366 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:57:40.921377 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:57:40.921388 | orchestrator | 2026-03-29 01:57:40.921399 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 01:57:40.921409 | orchestrator | 2026-03-29 01:57:40.921420 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-29 01:57:40.921431 | orchestrator | Sunday 29 March 2026 01:57:37 +0000 (0:00:01.585) 0:00:39.230 ********** 2026-03-29 01:57:40.921442 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:57:40.921453 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:57:40.921463 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:57:40.921474 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:57:40.921486 | orchestrator | ok: [testbed-manager] 2026-03-29 01:57:40.921496 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:57:40.921507 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:57:40.921518 | orchestrator | 2026-03-29 01:57:40.921529 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:57:40.921540 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:57:40.921551 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:57:40.921611 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:57:40.921632 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:57:40.921653 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:57:40.921672 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:57:40.921683 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:57:40.921694 | orchestrator | 2026-03-29 01:57:40.921705 | orchestrator | 2026-03-29 01:57:40.921716 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:57:40.921727 | orchestrator | Sunday 29 March 2026 01:57:40 +0000 (0:00:03.762) 0:00:42.993 ********** 2026-03-29 01:57:40.921738 | orchestrator | =============================================================================== 2026-03-29 01:57:40.921748 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.06s 2026-03-29 01:57:40.921759 | orchestrator | Install required packages (Debian) -------------------------------------- 8.82s 2026-03-29 01:57:40.921770 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.76s 2026-03-29 01:57:40.921781 | orchestrator | Copy fact files --------------------------------------------------------- 3.68s 2026-03-29 01:57:40.921791 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.59s 2026-03-29 01:57:40.921802 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s 2026-03-29 01:57:40.921822 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.17s 2026-03-29 01:57:41.148250 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s 2026-03-29 01:57:41.148354 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s 2026-03-29 01:57:41.148369 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.48s 2026-03-29 01:57:41.148381 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s 2026-03-29 01:57:41.148418 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s 2026-03-29 01:57:41.148429 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s 2026-03-29 01:57:41.148440 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s 2026-03-29 01:57:41.148451 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2026-03-29 01:57:41.148463 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2026-03-29 01:57:41.148474 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2026-03-29 01:57:41.148485 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2026-03-29 01:57:41.457343 | orchestrator | + osism apply bootstrap 2026-03-29 01:57:53.555285 | orchestrator | 2026-03-29 01:57:53 | INFO  | Task 4930bfaf-87c4-40cc-8c31-3bd1301fa324 (bootstrap) was prepared for execution. 2026-03-29 01:57:53.555397 | orchestrator | 2026-03-29 01:57:53 | INFO  | It takes a moment until task 4930bfaf-87c4-40cc-8c31-3bd1301fa324 (bootstrap) has been started and output is visible here. 
2026-03-29 01:58:09.682385 | orchestrator | 2026-03-29 01:58:09.682481 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-29 01:58:09.682491 | orchestrator | 2026-03-29 01:58:09.682498 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-29 01:58:09.682505 | orchestrator | Sunday 29 March 2026 01:57:57 +0000 (0:00:00.148) 0:00:00.148 ********** 2026-03-29 01:58:09.682511 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:09.682519 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:09.682525 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:09.682532 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:09.682538 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:09.682544 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:09.682550 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:09.682556 | orchestrator | 2026-03-29 01:58:09.682641 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 01:58:09.682648 | orchestrator | 2026-03-29 01:58:09.682654 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-29 01:58:09.682660 | orchestrator | Sunday 29 March 2026 01:57:58 +0000 (0:00:00.274) 0:00:00.422 ********** 2026-03-29 01:58:09.682666 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:09.682672 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:09.682678 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:09.682685 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:09.682691 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:09.682696 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:09.682702 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:09.682708 | orchestrator | 2026-03-29 01:58:09.682714 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-03-29 01:58:09.682720 | orchestrator | 2026-03-29 01:58:09.682726 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-29 01:58:09.682732 | orchestrator | Sunday 29 March 2026 01:58:01 +0000 (0:00:03.707) 0:00:04.130 ********** 2026-03-29 01:58:09.682739 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-29 01:58:09.682746 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-29 01:58:09.682752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-03-29 01:58:09.682758 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-29 01:58:09.682764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 01:58:09.682770 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-29 01:58:09.682776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-03-29 01:58:09.682782 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-29 01:58:09.682788 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-29 01:58:09.682813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 01:58:09.682820 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-29 01:58:09.682826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 01:58:09.682831 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-29 01:58:09.682837 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-29 01:58:09.682843 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-29 01:58:09.682849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 01:58:09.682856 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-29 01:58:09.682862 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-03-29 01:58:09.682868 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-29 01:58:09.682873 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-29 01:58:09.682879 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:58:09.682885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 01:58:09.682891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-03-29 01:58:09.682897 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-29 01:58:09.682903 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:58:09.682909 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-29 01:58:09.682916 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-29 01:58:09.682922 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-29 01:58:09.682928 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-03-29 01:58:09.682934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-29 01:58:09.682940 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:58:09.682946 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-29 01:58:09.682952 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-29 01:58:09.682958 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-29 01:58:09.682964 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-29 01:58:09.682970 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-29 01:58:09.682976 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-03-29 01:58:09.682982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-29 01:58:09.682988 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-2)  2026-03-29 01:58:09.682994 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-29 01:58:09.683000 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:58:09.683006 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-29 01:58:09.683012 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-29 01:58:09.683019 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 01:58:09.683025 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-29 01:58:09.683031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 01:58:09.683050 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-29 01:58:09.683056 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-29 01:58:09.683062 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-29 01:58:09.683068 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:58:09.683075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 01:58:09.683080 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:58:09.683086 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-29 01:58:09.683092 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-29 01:58:09.683098 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-29 01:58:09.683123 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:58:09.683129 | orchestrator | 2026-03-29 01:58:09.683135 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-29 01:58:09.683141 | orchestrator | 2026-03-29 01:58:09.683147 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-29 01:58:09.683153 | orchestrator | Sunday 29 March 2026 01:58:02 +0000 (0:00:00.462) 
0:00:04.592 ********** 2026-03-29 01:58:09.683159 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:09.683165 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:09.683171 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:09.683177 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:09.683183 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:09.683189 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:09.683195 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:09.683201 | orchestrator | 2026-03-29 01:58:09.683207 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-29 01:58:09.683213 | orchestrator | Sunday 29 March 2026 01:58:03 +0000 (0:00:01.241) 0:00:05.834 ********** 2026-03-29 01:58:09.683220 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:09.683226 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:09.683232 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:09.683238 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:09.683244 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:09.683250 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:09.683256 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:09.683262 | orchestrator | 2026-03-29 01:58:09.683268 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-29 01:58:09.683274 | orchestrator | Sunday 29 March 2026 01:58:04 +0000 (0:00:01.297) 0:00:07.132 ********** 2026-03-29 01:58:09.683281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:58:09.683289 | orchestrator | 2026-03-29 01:58:09.683296 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-29 01:58:09.683302 | orchestrator | Sunday 
29 March 2026 01:58:05 +0000 (0:00:00.290) 0:00:07.422 ********** 2026-03-29 01:58:09.683308 | orchestrator | changed: [testbed-manager] 2026-03-29 01:58:09.683314 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:58:09.683320 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:58:09.683326 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:58:09.683332 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:58:09.683338 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:58:09.683345 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:58:09.683351 | orchestrator | 2026-03-29 01:58:09.683356 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-29 01:58:09.683362 | orchestrator | Sunday 29 March 2026 01:58:07 +0000 (0:00:02.163) 0:00:09.585 ********** 2026-03-29 01:58:09.683368 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:58:09.683375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:58:09.683383 | orchestrator | 2026-03-29 01:58:09.683389 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-29 01:58:09.683395 | orchestrator | Sunday 29 March 2026 01:58:07 +0000 (0:00:00.258) 0:00:09.844 ********** 2026-03-29 01:58:09.683401 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:58:09.683407 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:58:09.683413 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:58:09.683419 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:58:09.683425 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:58:09.683431 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:58:09.683437 | orchestrator | 2026-03-29 01:58:09.683448 | orchestrator | TASK [osism.commons.proxy : Set 
system wide settings in environment file] ****** 2026-03-29 01:58:09.683455 | orchestrator | Sunday 29 March 2026 01:58:08 +0000 (0:00:01.043) 0:00:10.888 ********** 2026-03-29 01:58:09.683461 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:58:09.683467 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:58:09.683473 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:58:09.683479 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:58:09.683485 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:58:09.683491 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:58:09.683497 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:58:09.683503 | orchestrator | 2026-03-29 01:58:09.683509 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-29 01:58:09.683515 | orchestrator | Sunday 29 March 2026 01:58:09 +0000 (0:00:00.634) 0:00:11.522 ********** 2026-03-29 01:58:09.683521 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:58:09.683527 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:58:09.683533 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:58:09.683539 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:58:09.683549 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:58:09.683555 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:58:09.683573 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:09.683580 | orchestrator | 2026-03-29 01:58:09.683587 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-29 01:58:09.683595 | orchestrator | Sunday 29 March 2026 01:58:09 +0000 (0:00:00.426) 0:00:11.949 ********** 2026-03-29 01:58:09.683601 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:58:09.683607 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:58:09.683616 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:58:21.845054 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 01:58:21.845167 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:58:21.845180 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:58:21.845190 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:58:21.845200 | orchestrator | 2026-03-29 01:58:21.845211 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-29 01:58:21.845222 | orchestrator | Sunday 29 March 2026 01:58:09 +0000 (0:00:00.207) 0:00:12.156 ********** 2026-03-29 01:58:21.845234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:58:21.845260 | orchestrator | 2026-03-29 01:58:21.845270 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-29 01:58:21.845281 | orchestrator | Sunday 29 March 2026 01:58:10 +0000 (0:00:00.273) 0:00:12.430 ********** 2026-03-29 01:58:21.845291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:58:21.845301 | orchestrator | 2026-03-29 01:58:21.845310 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-29 01:58:21.845320 | orchestrator | Sunday 29 March 2026 01:58:10 +0000 (0:00:00.292) 0:00:12.723 ********** 2026-03-29 01:58:21.845329 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.845340 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:21.845350 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.845359 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:21.845383 | orchestrator | ok: [testbed-node-4] 2026-03-29 
01:58:21.845400 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:21.845415 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.845431 | orchestrator | 2026-03-29 01:58:21.845448 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-29 01:58:21.845465 | orchestrator | Sunday 29 March 2026 01:58:11 +0000 (0:00:01.542) 0:00:14.265 ********** 2026-03-29 01:58:21.845498 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:58:21.845509 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:58:21.845518 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:58:21.845527 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:58:21.845537 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:58:21.845548 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:58:21.845613 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:58:21.845627 | orchestrator | 2026-03-29 01:58:21.845639 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-29 01:58:21.845650 | orchestrator | Sunday 29 March 2026 01:58:12 +0000 (0:00:00.260) 0:00:14.525 ********** 2026-03-29 01:58:21.845661 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.845672 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.845683 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:21.845694 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.845705 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:21.845715 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.845725 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:21.845736 | orchestrator | 2026-03-29 01:58:21.845747 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-29 01:58:21.845758 | orchestrator | Sunday 29 March 2026 01:58:12 +0000 (0:00:00.602) 0:00:15.128 ********** 2026-03-29 01:58:21.845769 | orchestrator | skipping: 
[testbed-manager] 2026-03-29 01:58:21.845780 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:58:21.845791 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:58:21.845802 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:58:21.845813 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:58:21.845823 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:58:21.845834 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:58:21.845845 | orchestrator | 2026-03-29 01:58:21.845857 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-29 01:58:21.845869 | orchestrator | Sunday 29 March 2026 01:58:13 +0000 (0:00:00.298) 0:00:15.426 ********** 2026-03-29 01:58:21.845880 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.845891 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:58:21.845902 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:58:21.845913 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:58:21.845924 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:58:21.845935 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:58:21.845946 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:58:21.845955 | orchestrator | 2026-03-29 01:58:21.845964 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-29 01:58:21.845974 | orchestrator | Sunday 29 March 2026 01:58:13 +0000 (0:00:00.617) 0:00:16.044 ********** 2026-03-29 01:58:21.845984 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.845993 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:58:21.846002 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:58:21.846068 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:58:21.846082 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:58:21.846105 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:58:21.846124 | orchestrator | changed: 
[testbed-node-2] 2026-03-29 01:58:21.846134 | orchestrator | 2026-03-29 01:58:21.846143 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-29 01:58:21.846153 | orchestrator | Sunday 29 March 2026 01:58:14 +0000 (0:00:01.127) 0:00:17.172 ********** 2026-03-29 01:58:21.846162 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.846171 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:21.846190 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.846200 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:21.846209 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.846219 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.846228 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:21.846237 | orchestrator | 2026-03-29 01:58:21.846247 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-29 01:58:21.846269 | orchestrator | Sunday 29 March 2026 01:58:15 +0000 (0:00:01.046) 0:00:18.218 ********** 2026-03-29 01:58:21.846312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:58:21.846331 | orchestrator | 2026-03-29 01:58:21.846347 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-29 01:58:21.846358 | orchestrator | Sunday 29 March 2026 01:58:16 +0000 (0:00:00.291) 0:00:18.510 ********** 2026-03-29 01:58:21.846367 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:58:21.846377 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:58:21.846386 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:58:21.846396 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:58:21.846405 | orchestrator | changed: [testbed-node-4] 2026-03-29 
01:58:21.846414 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:58:21.846423 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:58:21.846433 | orchestrator | 2026-03-29 01:58:21.846442 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-29 01:58:21.846452 | orchestrator | Sunday 29 March 2026 01:58:17 +0000 (0:00:01.254) 0:00:19.765 ********** 2026-03-29 01:58:21.846461 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.846471 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.846480 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.846489 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.846499 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:21.846508 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:21.846518 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:21.846527 | orchestrator | 2026-03-29 01:58:21.846536 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-29 01:58:21.846546 | orchestrator | Sunday 29 March 2026 01:58:17 +0000 (0:00:00.200) 0:00:19.965 ********** 2026-03-29 01:58:21.846556 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.846599 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.846614 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.846624 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.846633 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:21.846642 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:21.846651 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:21.846661 | orchestrator | 2026-03-29 01:58:21.846670 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-29 01:58:21.846680 | orchestrator | Sunday 29 March 2026 01:58:17 +0000 (0:00:00.196) 0:00:20.162 ********** 2026-03-29 01:58:21.846689 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.846698 | 
orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.846707 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.846717 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.846726 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:21.846735 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:21.846744 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:21.846754 | orchestrator | 2026-03-29 01:58:21.846763 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-29 01:58:21.846772 | orchestrator | Sunday 29 March 2026 01:58:17 +0000 (0:00:00.197) 0:00:20.360 ********** 2026-03-29 01:58:21.846783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:58:21.846794 | orchestrator | 2026-03-29 01:58:21.846804 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-29 01:58:21.846813 | orchestrator | Sunday 29 March 2026 01:58:18 +0000 (0:00:00.258) 0:00:20.619 ********** 2026-03-29 01:58:21.846823 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.846832 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.846850 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.846860 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.846869 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:21.846878 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:21.846888 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:21.846899 | orchestrator | 2026-03-29 01:58:21.846918 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-29 01:58:21.846942 | orchestrator | Sunday 29 March 2026 01:58:18 +0000 (0:00:00.535) 0:00:21.154 ********** 2026-03-29 01:58:21.846956 | orchestrator | 
skipping: [testbed-manager] 2026-03-29 01:58:21.846973 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:58:21.846989 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:58:21.847002 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:58:21.847015 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:58:21.847028 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:58:21.847041 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:58:21.847055 | orchestrator | 2026-03-29 01:58:21.847070 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-29 01:58:21.847084 | orchestrator | Sunday 29 March 2026 01:58:18 +0000 (0:00:00.192) 0:00:21.347 ********** 2026-03-29 01:58:21.847100 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.847116 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.847132 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.847146 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.847162 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:58:21.847175 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:58:21.847189 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:58:21.847205 | orchestrator | 2026-03-29 01:58:21.847222 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-29 01:58:21.847239 | orchestrator | Sunday 29 March 2026 01:58:20 +0000 (0:00:01.065) 0:00:22.412 ********** 2026-03-29 01:58:21.847255 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.847270 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.847286 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.847302 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.847317 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:58:21.847332 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:58:21.847347 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:58:21.847364 | orchestrator | 
2026-03-29 01:58:21.847381 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-29 01:58:21.847396 | orchestrator | Sunday 29 March 2026 01:58:20 +0000 (0:00:00.576) 0:00:22.989 ********** 2026-03-29 01:58:21.847413 | orchestrator | ok: [testbed-manager] 2026-03-29 01:58:21.847428 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:58:21.847443 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:58:21.847472 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:58:21.847505 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:59:03.302986 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:59:03.303188 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:59:03.303222 | orchestrator | 2026-03-29 01:59:03.303246 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-29 01:59:03.303268 | orchestrator | Sunday 29 March 2026 01:58:21 +0000 (0:00:01.251) 0:00:24.240 ********** 2026-03-29 01:59:03.303288 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:59:03.303309 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:59:03.303328 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:59:03.303347 | orchestrator | changed: [testbed-manager] 2026-03-29 01:59:03.303367 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:59:03.303387 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:59:03.303406 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:59:03.303425 | orchestrator | 2026-03-29 01:59:03.303445 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-29 01:59:03.303464 | orchestrator | Sunday 29 March 2026 01:58:38 +0000 (0:00:17.002) 0:00:41.242 ********** 2026-03-29 01:59:03.303484 | orchestrator | ok: [testbed-manager] 2026-03-29 01:59:03.303536 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:59:03.303558 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:59:03.303612 | orchestrator 
| ok: [testbed-node-5]
2026-03-29 01:59:03.303632 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.303651 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.303671 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.303690 | orchestrator |
2026-03-29 01:59:03.303710 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-29 01:59:03.303730 | orchestrator | Sunday 29 March 2026 01:58:39 +0000 (0:00:00.193) 0:00:41.436 **********
2026-03-29 01:59:03.303749 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.303766 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.303785 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.303802 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.303819 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.303835 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.303852 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.303868 | orchestrator |
2026-03-29 01:59:03.303884 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-29 01:59:03.303901 | orchestrator | Sunday 29 March 2026 01:58:39 +0000 (0:00:00.170) 0:00:41.607 **********
2026-03-29 01:59:03.303919 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.303937 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.303955 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.303972 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.303989 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.304008 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.304026 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.304045 | orchestrator |
2026-03-29 01:59:03.304062 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-29 01:59:03.304081 | orchestrator | Sunday 29 March 2026 01:58:39 +0000 (0:00:00.169) 0:00:41.776 **********
2026-03-29 01:59:03.304103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:59:03.304125 | orchestrator |
2026-03-29 01:59:03.304144 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-29 01:59:03.304162 | orchestrator | Sunday 29 March 2026 01:58:39 +0000 (0:00:00.237) 0:00:42.014 **********
2026-03-29 01:59:03.304180 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.304198 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.304217 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.304234 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.304252 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.304271 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.304289 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.304307 | orchestrator |
2026-03-29 01:59:03.304325 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-29 01:59:03.304344 | orchestrator | Sunday 29 March 2026 01:58:41 +0000 (0:00:01.935) 0:00:43.949 **********
2026-03-29 01:59:03.304363 | orchestrator | changed: [testbed-manager]
2026-03-29 01:59:03.304383 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:59:03.304402 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:59:03.304414 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:59:03.304424 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:59:03.304435 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:59:03.304445 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:59:03.304456 | orchestrator |
2026-03-29 01:59:03.304467 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-29 01:59:03.304478 | orchestrator | Sunday 29 March 2026 01:58:42 +0000 (0:00:01.135) 0:00:45.084 **********
2026-03-29 01:59:03.304489 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.304500 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.304510 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.304521 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.304549 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.304559 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.304604 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.304622 | orchestrator |
2026-03-29 01:59:03.304640 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-29 01:59:03.304660 | orchestrator | Sunday 29 March 2026 01:58:44 +0000 (0:00:01.622) 0:00:46.707 **********
2026-03-29 01:59:03.304680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:59:03.304700 | orchestrator |
2026-03-29 01:59:03.304731 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-29 01:59:03.304744 | orchestrator | Sunday 29 March 2026 01:58:44 +0000 (0:00:00.287) 0:00:46.994 **********
2026-03-29 01:59:03.304754 | orchestrator | changed: [testbed-manager]
2026-03-29 01:59:03.304765 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:59:03.304775 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:59:03.304786 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:59:03.304797 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:59:03.304808 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:59:03.304818 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:59:03.304829 | orchestrator |
2026-03-29 01:59:03.304865 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-29 01:59:03.304876 | orchestrator | Sunday 29 March 2026 01:58:45 +0000 (0:00:01.057) 0:00:48.052 **********
2026-03-29 01:59:03.304887 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:59:03.304898 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:59:03.304907 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:59:03.304917 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:59:03.304926 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:59:03.304935 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:59:03.304945 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:59:03.304954 | orchestrator |
2026-03-29 01:59:03.304963 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-29 01:59:03.304973 | orchestrator | Sunday 29 March 2026 01:58:45 +0000 (0:00:00.199) 0:00:48.252 **********
2026-03-29 01:59:03.304983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:59:03.304993 | orchestrator |
2026-03-29 01:59:03.305002 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-29 01:59:03.305012 | orchestrator | Sunday 29 March 2026 01:58:46 +0000 (0:00:00.318) 0:00:48.570 **********
2026-03-29 01:59:03.305021 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.305031 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.305040 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.305050 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.305059 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.305068 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.305077 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.305087 | orchestrator |
2026-03-29 01:59:03.305096 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-29 01:59:03.305106 | orchestrator | Sunday 29 March 2026 01:58:48 +0000 (0:00:02.119) 0:00:50.690 **********
2026-03-29 01:59:03.305115 | orchestrator | changed: [testbed-manager]
2026-03-29 01:59:03.305125 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:59:03.305134 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:59:03.305143 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:59:03.305153 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:59:03.305162 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:59:03.305171 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:59:03.305181 | orchestrator |
2026-03-29 01:59:03.305199 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-29 01:59:03.305209 | orchestrator | Sunday 29 March 2026 01:58:49 +0000 (0:00:01.260) 0:00:51.951 **********
2026-03-29 01:59:03.305219 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:59:03.305228 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:59:03.305237 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:59:03.305247 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:59:03.305256 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:59:03.305266 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:59:03.305275 | orchestrator | changed: [testbed-manager]
2026-03-29 01:59:03.305284 | orchestrator |
2026-03-29 01:59:03.305294 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-29 01:59:03.305303 | orchestrator | Sunday 29 March 2026 01:59:00 +0000 (0:00:11.077) 0:01:03.029 **********
2026-03-29 01:59:03.305313 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.305322 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.305332 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.305341 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.305350 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.305360 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.305369 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.305378 | orchestrator |
2026-03-29 01:59:03.305388 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-29 01:59:03.305398 | orchestrator | Sunday 29 March 2026 01:59:01 +0000 (0:00:01.030) 0:01:04.059 **********
2026-03-29 01:59:03.305407 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.305416 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.305426 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.305435 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.305444 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.305454 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.305463 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.305472 | orchestrator |
2026-03-29 01:59:03.305482 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-29 01:59:03.305491 | orchestrator | Sunday 29 March 2026 01:59:02 +0000 (0:00:00.984) 0:01:05.044 **********
2026-03-29 01:59:03.305501 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.305510 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.305519 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.305529 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.305538 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.305547 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.305556 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.305624 | orchestrator |
2026-03-29 01:59:03.305637 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-29 01:59:03.305647 | orchestrator | Sunday 29 March 2026 01:59:02 +0000 (0:00:00.225) 0:01:05.269 **********
2026-03-29 01:59:03.305656 | orchestrator | ok: [testbed-manager]
2026-03-29 01:59:03.305666 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:59:03.305675 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:59:03.305684 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:59:03.305694 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:59:03.305703 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:59:03.305712 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:59:03.305721 | orchestrator |
2026-03-29 01:59:03.305737 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-29 01:59:03.305747 | orchestrator | Sunday 29 March 2026 01:59:03 +0000 (0:00:00.205) 0:01:05.475 **********
2026-03-29 01:59:03.305757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:59:03.305767 | orchestrator |
2026-03-29 01:59:03.305785 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-29 02:01:58.946095 | orchestrator | Sunday 29 March 2026 01:59:03 +0000 (0:00:00.224) 0:01:05.699 **********
2026-03-29 02:01:58.946167 | orchestrator | ok: [testbed-manager]
2026-03-29 02:01:58.946174 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:01:58.946178 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:01:58.946183 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:01:58.946187 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:01:58.946190 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:01:58.946195 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:01:58.946199 | orchestrator |
2026-03-29 02:01:58.946203 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-29 02:01:58.946208 | orchestrator | Sunday 29 March 2026 01:59:05 +0000 (0:00:02.069) 0:01:07.769 **********
2026-03-29 02:01:58.946212 | orchestrator | changed: [testbed-manager]
2026-03-29 02:01:58.946217 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:01:58.946221 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:01:58.946224 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:01:58.946228 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:01:58.946232 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:01:58.946236 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:01:58.946239 | orchestrator |
2026-03-29 02:01:58.946243 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-29 02:01:58.946248 | orchestrator | Sunday 29 March 2026 01:59:05 +0000 (0:00:00.533) 0:01:08.302 **********
2026-03-29 02:01:58.946251 | orchestrator | ok: [testbed-manager]
2026-03-29 02:01:58.946255 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:01:58.946259 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:01:58.946263 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:01:58.946266 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:01:58.946270 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:01:58.946274 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:01:58.946277 | orchestrator |
2026-03-29 02:01:58.946281 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-29 02:01:58.946285 | orchestrator | Sunday 29 March 2026 01:59:06 +0000 (0:00:00.177) 0:01:08.479 **********
2026-03-29 02:01:58.946289 | orchestrator | ok: [testbed-manager]
2026-03-29 02:01:58.946293 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:01:58.946297 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:01:58.946300 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:01:58.946304 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:01:58.946308 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:01:58.946311 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:01:58.946315 | orchestrator |
2026-03-29 02:01:58.946319 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-29 02:01:58.946323 | orchestrator | Sunday 29 March 2026 01:59:07 +0000 (0:00:01.449) 0:01:09.928 **********
2026-03-29 02:01:58.946326 | orchestrator | changed: [testbed-manager]
2026-03-29 02:01:58.946330 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:01:58.946334 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:01:58.946338 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:01:58.946341 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:01:58.946345 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:01:58.946349 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:01:58.946353 | orchestrator |
2026-03-29 02:01:58.946356 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-29 02:01:58.946364 | orchestrator | Sunday 29 March 2026 01:59:09 +0000 (0:00:02.064) 0:01:11.992 **********
2026-03-29 02:01:58.946368 | orchestrator | ok: [testbed-manager]
2026-03-29 02:01:58.946372 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:01:58.946375 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:01:58.946379 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:01:58.946383 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:01:58.946387 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:01:58.946390 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:01:58.946394 | orchestrator |
2026-03-29 02:01:58.946398 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-29 02:01:58.946416 | orchestrator | Sunday 29 March 2026 01:59:12 +0000 (0:00:02.660) 0:01:14.652 **********
2026-03-29 02:01:58.946420 | orchestrator | ok: [testbed-manager]
2026-03-29 02:01:58.946424 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:01:58.946428 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:01:58.946431 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:01:58.946435 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:01:58.946439 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:01:58.946442 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:01:58.946446 | orchestrator |
2026-03-29 02:01:58.946450 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-29 02:01:58.946454 | orchestrator | Sunday 29 March 2026 02:00:24 +0000 (0:01:12.183) 0:02:26.836 **********
2026-03-29 02:01:58.946457 | orchestrator | changed: [testbed-manager]
2026-03-29 02:01:58.946461 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:01:58.946465 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:01:58.946468 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:01:58.946472 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:01:58.946476 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:01:58.946479 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:01:58.946483 | orchestrator |
2026-03-29 02:01:58.946487 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-29 02:01:58.946491 | orchestrator | Sunday 29 March 2026 02:01:44 +0000 (0:01:20.347) 0:03:47.183 **********
2026-03-29 02:01:58.946494 | orchestrator | ok: [testbed-manager]
2026-03-29 02:01:58.946498 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:01:58.946502 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:01:58.946506 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:01:58.946509 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:01:58.946513 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:01:58.946517 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:01:58.946521 | orchestrator |
2026-03-29 02:01:58.946524 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-29 02:01:58.946528 | orchestrator | Sunday 29 March 2026 02:01:46 +0000 (0:00:01.915) 0:03:49.099 **********
2026-03-29 02:01:58.946532 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:01:58.946536 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:01:58.946540 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:01:58.946543 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:01:58.946547 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:01:58.946551 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:01:58.946555 | orchestrator | changed: [testbed-manager]
2026-03-29 02:01:58.946558 | orchestrator |
2026-03-29 02:01:58.946562 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-29 02:01:58.946566 | orchestrator | Sunday 29 March 2026 02:01:57 +0000 (0:00:11.044) 0:04:00.143 **********
2026-03-29 02:01:58.946588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-29 02:01:58.946635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-29 02:01:58.946642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-29 02:01:58.946652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-29 02:01:58.946657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-29 02:01:58.946661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-29 02:01:58.946666 | orchestrator |
2026-03-29 02:01:58.946671 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-29 02:01:58.946675 | orchestrator | Sunday 29 March 2026 02:01:58 +0000 (0:00:00.391) 0:04:00.535 **********
2026-03-29 02:01:58.946679 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 02:01:58.946684 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 02:01:58.946688 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:01:58.946692 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 02:01:58.946697 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:01:58.946701 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 02:01:58.946705 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:01:58.946710 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:01:58.946714 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 02:01:58.946718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 02:01:58.946723 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 02:01:58.946727 | orchestrator |
2026-03-29 02:01:58.946731 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-29 02:01:58.946736 | orchestrator | Sunday 29 March 2026 02:01:58 +0000 (0:00:00.714) 0:04:01.250 **********
2026-03-29 02:01:58.946740 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 02:01:58.946748 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 02:01:58.946752 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 02:01:58.946757 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 02:01:58.946761 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 02:01:58.946768 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 02:02:06.895161 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 02:02:06.895260 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 02:02:06.895272 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 02:02:06.895299 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 02:02:06.895309 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 02:02:06.895318 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 02:02:06.895325 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 02:02:06.895333 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 02:02:06.895341 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 02:02:06.895349 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 02:02:06.895358 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 02:02:06.895366 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 02:02:06.895374 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 02:02:06.895381 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 02:02:06.895390 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:02:06.895405 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 02:02:06.895419 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:02:06.895432 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 02:02:06.895445 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 02:02:06.895457 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 02:02:06.895468 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 02:02:06.895481 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 02:02:06.895494 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 02:02:06.895508 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 02:02:06.895521 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 02:02:06.895534 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 02:02:06.895548 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 02:02:06.895562 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 02:02:06.895576 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 02:02:06.895589 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 02:02:06.895659 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 02:02:06.895672 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 02:02:06.895686 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 02:02:06.895700 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:02:06.895714 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 02:02:06.895727 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 02:02:06.895752 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 02:02:06.895767 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:02:06.895796 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 02:02:06.895811 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 02:02:06.895825 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 02:02:06.895838 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 02:02:06.895851 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 02:02:06.895885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 02:02:06.895899 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 02:02:06.895913 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 02:02:06.895926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 02:02:06.895940 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 02:02:06.895953 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 02:02:06.895966 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 02:02:06.895980 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 02:02:06.895993 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 02:02:06.896007 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 02:02:06.896020 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 02:02:06.896033 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 02:02:06.896046 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 02:02:06.896060 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 02:02:06.896073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 02:02:06.896087 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 02:02:06.896101 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 02:02:06.896114 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 02:02:06.896127 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 02:02:06.896139 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 02:02:06.896153 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 02:02:06.896166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 02:02:06.896179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 02:02:06.896193 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 02:02:06.896205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 02:02:06.896219 | orchestrator |
2026-03-29 02:02:06.896233 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-29 02:02:06.896255 | orchestrator | Sunday 29 March 2026 02:02:04 +0000 (0:00:05.899) 0:04:07.149 **********
2026-03-29 02:02:06.896268 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 02:02:06.896281 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 02:02:06.896294 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 02:02:06.896305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 02:02:06.896319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 02:02:06.896332 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 02:02:06.896346 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 02:02:06.896358 | orchestrator |
2026-03-29 02:02:06.896371 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-29 02:02:06.896385 | orchestrator | Sunday 29 March 2026 02:02:06 +0000 (0:00:01.637) 0:04:08.787 **********
2026-03-29 02:02:06.896398 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:06.896411 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:02:06.896424 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:06.896444 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:06.896457 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:02:06.896469 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:02:06.896480 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:06.896493 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:02:06.896505 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:06.896518 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:06.896542 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:21.264932 | orchestrator |
2026-03-29 02:02:21.265017 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-29 02:02:21.265026 | orchestrator | Sunday 29 March 2026 02:02:06 +0000 (0:00:00.504) 0:04:09.292 **********
2026-03-29 02:02:21.265032 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:21.265045 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:02:21.265056 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:21.265065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:21.265075 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:02:21.265084 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:02:21.265093 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:21.265102 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:02:21.265112 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:21.265122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:21.265132 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 02:02:21.265142 | orchestrator |
2026-03-29 02:02:21.265152 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-29 02:02:21.265182 | orchestrator | Sunday 29 March 2026 02:02:08 +0000 (0:00:01.582) 0:04:10.874 **********
2026-03-29 02:02:21.265193 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 02:02:21.265203 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:02:21.265212 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 02:02:21.265221 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:02:21.265231 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 02:02:21.265243 | orchestrator | skipping:
[testbed-node-1] 2026-03-29 02:02:21.265252 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-29 02:02:21.265261 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:02:21.265270 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-29 02:02:21.265279 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-29 02:02:21.265288 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-29 02:02:21.265298 | orchestrator | 2026-03-29 02:02:21.265307 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-29 02:02:21.265316 | orchestrator | Sunday 29 March 2026 02:02:09 +0000 (0:00:00.667) 0:04:11.541 ********** 2026-03-29 02:02:21.265325 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:02:21.265334 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:02:21.265343 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:02:21.265352 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:02:21.265361 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:02:21.265370 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:02:21.265379 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:02:21.265388 | orchestrator | 2026-03-29 02:02:21.265397 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-29 02:02:21.265406 | orchestrator | Sunday 29 March 2026 02:02:09 +0000 (0:00:00.343) 0:04:11.885 ********** 2026-03-29 02:02:21.265415 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:02:21.265424 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:02:21.265434 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:02:21.265443 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:02:21.265451 | 
orchestrator | ok: [testbed-node-0] 2026-03-29 02:02:21.265461 | orchestrator | ok: [testbed-manager] 2026-03-29 02:02:21.265470 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:02:21.265478 | orchestrator | 2026-03-29 02:02:21.265488 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-29 02:02:21.265497 | orchestrator | Sunday 29 March 2026 02:02:14 +0000 (0:00:05.434) 0:04:17.320 ********** 2026-03-29 02:02:21.265506 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-29 02:02:21.265516 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-29 02:02:21.265525 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:02:21.265534 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:02:21.265543 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-29 02:02:21.265552 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:02:21.265561 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-29 02:02:21.265570 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-29 02:02:21.265579 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:02:21.265631 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-29 02:02:21.265643 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:02:21.265652 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:02:21.265661 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-29 02:02:21.265670 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:02:21.265679 | orchestrator | 2026-03-29 02:02:21.265689 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-29 02:02:21.265705 | orchestrator | Sunday 29 March 2026 02:02:15 +0000 (0:00:00.331) 0:04:17.652 ********** 2026-03-29 02:02:21.265715 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-29 02:02:21.265724 | orchestrator | ok: [testbed-node-4] => 
(item=cron) 2026-03-29 02:02:21.265733 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-29 02:02:21.265760 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-29 02:02:21.265770 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-29 02:02:21.265779 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-29 02:02:21.265788 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-29 02:02:21.265797 | orchestrator | 2026-03-29 02:02:21.265807 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-29 02:02:21.265816 | orchestrator | Sunday 29 March 2026 02:02:16 +0000 (0:00:01.158) 0:04:18.810 ********** 2026-03-29 02:02:21.265826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:02:21.265838 | orchestrator | 2026-03-29 02:02:21.265847 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-29 02:02:21.265855 | orchestrator | Sunday 29 March 2026 02:02:16 +0000 (0:00:00.565) 0:04:19.376 ********** 2026-03-29 02:02:21.265864 | orchestrator | ok: [testbed-manager] 2026-03-29 02:02:21.265874 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:02:21.265883 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:02:21.265893 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:02:21.265902 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:02:21.265912 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:02:21.265920 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:02:21.265931 | orchestrator | 2026-03-29 02:02:21.265940 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-29 02:02:21.265950 | orchestrator | Sunday 29 March 2026 02:02:18 +0000 (0:00:01.308) 0:04:20.684 
********** 2026-03-29 02:02:21.265957 | orchestrator | ok: [testbed-manager] 2026-03-29 02:02:21.265963 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:02:21.265968 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:02:21.265974 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:02:21.265979 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:02:21.265984 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:02:21.265989 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:02:21.265995 | orchestrator | 2026-03-29 02:02:21.266000 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-29 02:02:21.266006 | orchestrator | Sunday 29 March 2026 02:02:18 +0000 (0:00:00.698) 0:04:21.383 ********** 2026-03-29 02:02:21.266011 | orchestrator | changed: [testbed-manager] 2026-03-29 02:02:21.266058 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:02:21.266064 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:02:21.266070 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:02:21.266076 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:02:21.266081 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:02:21.266087 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:02:21.266095 | orchestrator | 2026-03-29 02:02:21.266104 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-29 02:02:21.266111 | orchestrator | Sunday 29 March 2026 02:02:19 +0000 (0:00:00.635) 0:04:22.018 ********** 2026-03-29 02:02:21.266116 | orchestrator | ok: [testbed-manager] 2026-03-29 02:02:21.266121 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:02:21.266127 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:02:21.266132 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:02:21.266137 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:02:21.266143 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:02:21.266148 | orchestrator | ok: [testbed-node-2] 2026-03-29 
02:02:21.266163 | orchestrator | 2026-03-29 02:02:21.266168 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-29 02:02:21.266187 | orchestrator | Sunday 29 March 2026 02:02:20 +0000 (0:00:00.618) 0:04:22.637 ********** 2026-03-29 02:02:21.266196 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774748371.9745727, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:21.266204 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774748398.3756683, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:21.266215 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774748411.6176841, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:21.266238 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774748412.4537423, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467190 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774748414.0134764, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467333 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774748411.7169018, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467360 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774748403.3325024, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467412 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467427 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467453 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 
1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467465 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467499 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467512 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-03-29 02:02:26.467524 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 02:02:26.467544 | orchestrator | 2026-03-29 02:02:26.467558 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-29 02:02:26.467570 | orchestrator | Sunday 29 March 2026 02:02:21 +0000 (0:00:01.020) 0:04:23.658 ********** 2026-03-29 02:02:26.467582 | orchestrator | changed: [testbed-manager] 2026-03-29 02:02:26.467594 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:02:26.467632 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:02:26.467643 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:02:26.467654 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:02:26.467666 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:02:26.467676 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:02:26.467690 | orchestrator | 2026-03-29 02:02:26.467704 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-29 02:02:26.467717 | orchestrator | Sunday 29 March 2026 02:02:22 +0000 (0:00:01.146) 0:04:24.804 ********** 2026-03-29 02:02:26.467729 | orchestrator | changed: [testbed-manager] 2026-03-29 02:02:26.467742 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:02:26.467754 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:02:26.467767 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:02:26.467779 | orchestrator | changed: 
[testbed-node-5] 2026-03-29 02:02:26.467791 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:02:26.467803 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:02:26.467817 | orchestrator | 2026-03-29 02:02:26.467830 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-29 02:02:26.467843 | orchestrator | Sunday 29 March 2026 02:02:23 +0000 (0:00:01.167) 0:04:25.971 ********** 2026-03-29 02:02:26.467855 | orchestrator | changed: [testbed-manager] 2026-03-29 02:02:26.467868 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:02:26.467881 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:02:26.467893 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:02:26.467906 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:02:26.467919 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:02:26.467932 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:02:26.467944 | orchestrator | 2026-03-29 02:02:26.467957 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-29 02:02:26.467970 | orchestrator | Sunday 29 March 2026 02:02:24 +0000 (0:00:01.183) 0:04:27.155 ********** 2026-03-29 02:02:26.467982 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:02:26.467994 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:02:26.468008 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:02:26.468026 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:02:26.468039 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:02:26.468050 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:02:26.468061 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:02:26.468071 | orchestrator | 2026-03-29 02:02:26.468088 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-29 02:02:26.468108 | orchestrator | Sunday 29 March 2026 02:02:25 +0000 (0:00:00.329) 0:04:27.484 
********** 2026-03-29 02:02:26.468126 | orchestrator | ok: [testbed-manager] 2026-03-29 02:02:26.468146 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:02:26.468166 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:02:26.468187 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:02:26.468208 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:02:26.468225 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:02:26.468241 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:02:26.468252 | orchestrator | 2026-03-29 02:02:26.468262 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-29 02:02:26.468273 | orchestrator | Sunday 29 March 2026 02:02:25 +0000 (0:00:00.902) 0:04:28.387 ********** 2026-03-29 02:02:26.468286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:02:26.468308 | orchestrator | 2026-03-29 02:02:26.468319 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-29 02:02:26.468339 | orchestrator | Sunday 29 March 2026 02:02:26 +0000 (0:00:00.478) 0:04:28.866 ********** 2026-03-29 02:03:45.370086 | orchestrator | ok: [testbed-manager] 2026-03-29 02:03:45.370195 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:03:45.370211 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:03:45.370223 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:03:45.370234 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:03:45.370245 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:03:45.370256 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:03:45.370267 | orchestrator | 2026-03-29 02:03:45.370279 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-29 02:03:45.370291 
| orchestrator | Sunday 29 March 2026 02:02:35 +0000 (0:00:09.115) 0:04:37.981 ********** 2026-03-29 02:03:45.370302 | orchestrator | ok: [testbed-manager] 2026-03-29 02:03:45.370313 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:03:45.370324 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:03:45.370335 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:03:45.370345 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:03:45.370356 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:03:45.370366 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:03:45.370377 | orchestrator | 2026-03-29 02:03:45.370388 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-29 02:03:45.370399 | orchestrator | Sunday 29 March 2026 02:02:37 +0000 (0:00:01.435) 0:04:39.417 ********** 2026-03-29 02:03:45.370410 | orchestrator | ok: [testbed-manager] 2026-03-29 02:03:45.370421 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:03:45.370432 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:03:45.370442 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:03:45.370453 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:03:45.370463 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:03:45.370474 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:03:45.370485 | orchestrator | 2026-03-29 02:03:45.370495 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-29 02:03:45.370506 | orchestrator | Sunday 29 March 2026 02:02:38 +0000 (0:00:01.270) 0:04:40.688 ********** 2026-03-29 02:03:45.370517 | orchestrator | ok: [testbed-manager] 2026-03-29 02:03:45.370528 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:03:45.370538 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:03:45.370549 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:03:45.370560 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:03:45.370574 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:03:45.370587 | 
orchestrator | ok: [testbed-node-2] 2026-03-29 02:03:45.370600 | orchestrator | 2026-03-29 02:03:45.370641 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-29 02:03:45.370655 | orchestrator | Sunday 29 March 2026 02:02:38 +0000 (0:00:00.345) 0:04:41.033 ********** 2026-03-29 02:03:45.370668 | orchestrator | ok: [testbed-manager] 2026-03-29 02:03:45.370681 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:03:45.370693 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:03:45.370706 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:03:45.370718 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:03:45.370731 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:03:45.370743 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:03:45.370755 | orchestrator | 2026-03-29 02:03:45.370767 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-29 02:03:45.370780 | orchestrator | Sunday 29 March 2026 02:02:38 +0000 (0:00:00.335) 0:04:41.368 ********** 2026-03-29 02:03:45.370792 | orchestrator | ok: [testbed-manager] 2026-03-29 02:03:45.370805 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:03:45.370818 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:03:45.370831 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:03:45.370865 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:03:45.370878 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:03:45.370891 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:03:45.370903 | orchestrator | 2026-03-29 02:03:45.370915 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-29 02:03:45.370928 | orchestrator | Sunday 29 March 2026 02:02:39 +0000 (0:00:00.332) 0:04:41.701 ********** 2026-03-29 02:03:45.370941 | orchestrator | ok: [testbed-manager] 2026-03-29 02:03:45.370952 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:03:45.370963 | 
orchestrator | ok: [testbed-node-3]
2026-03-29 02:03:45.370974 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:03:45.370984 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:03:45.370995 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:03:45.371005 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:03:45.371016 | orchestrator |
2026-03-29 02:03:45.371026 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-29 02:03:45.371037 | orchestrator | Sunday 29 March 2026 02:02:44 +0000 (0:00:04.847) 0:04:46.549 **********
2026-03-29 02:03:45.371050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:03:45.371063 | orchestrator |
2026-03-29 02:03:45.371074 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-29 02:03:45.371085 | orchestrator | Sunday 29 March 2026 02:02:44 +0000 (0:00:00.436) 0:04:46.985 **********
2026-03-29 02:03:45.371096 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-29 02:03:45.371107 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-29 02:03:45.371118 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-29 02:03:45.371129 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:03:45.371140 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-29 02:03:45.371165 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-29 02:03:45.371176 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:03:45.371187 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-29 02:03:45.371198 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-29 02:03:45.371208 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:03:45.371219 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-29 02:03:45.371230 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-29 02:03:45.371241 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-29 02:03:45.371251 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:03:45.371262 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-29 02:03:45.371273 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-29 02:03:45.371302 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:03:45.371314 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:03:45.371324 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-29 02:03:45.371335 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-29 02:03:45.371346 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:03:45.371356 | orchestrator |
2026-03-29 02:03:45.371367 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-29 02:03:45.371378 | orchestrator | Sunday 29 March 2026 02:02:44 +0000 (0:00:00.374) 0:04:47.360 **********
2026-03-29 02:03:45.371389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:03:45.371400 | orchestrator |
2026-03-29 02:03:45.371411 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-29 02:03:45.371422 | orchestrator | Sunday 29 March 2026 02:02:45 +0000 (0:00:00.398) 0:04:47.758 **********
2026-03-29 02:03:45.371441 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-29 02:03:45.371452 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-29 02:03:45.371462 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:03:45.371473 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-29 02:03:45.371484 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:03:45.371494 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-29 02:03:45.371505 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:03:45.371516 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-29 02:03:45.371526 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:03:45.371537 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-29 02:03:45.371548 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:03:45.371558 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:03:45.371569 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-29 02:03:45.371580 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:03:45.371590 | orchestrator |
2026-03-29 02:03:45.371601 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-29 02:03:45.371639 | orchestrator | Sunday 29 March 2026 02:02:45 +0000 (0:00:00.308) 0:04:48.067 **********
2026-03-29 02:03:45.371658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:03:45.371678 | orchestrator |
2026-03-29 02:03:45.371697 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-29 02:03:45.371716 | orchestrator | Sunday 29 March 2026 02:02:46 +0000 (0:00:00.421) 0:04:48.488 **********
2026-03-29 02:03:45.371728 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:03:45.371738 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:03:45.371749 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:03:45.371759 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:03:45.371770 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:03:45.371781 | orchestrator | changed: [testbed-manager]
2026-03-29 02:03:45.371791 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:03:45.371802 | orchestrator |
2026-03-29 02:03:45.371813 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-29 02:03:45.371823 | orchestrator | Sunday 29 March 2026 02:03:19 +0000 (0:00:33.706) 0:05:22.195 **********
2026-03-29 02:03:45.371834 | orchestrator | changed: [testbed-manager]
2026-03-29 02:03:45.371845 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:03:45.371855 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:03:45.371866 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:03:45.371876 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:03:45.371887 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:03:45.371897 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:03:45.371925 | orchestrator |
2026-03-29 02:03:45.371948 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-29 02:03:45.371959 | orchestrator | Sunday 29 March 2026 02:03:29 +0000 (0:00:09.229) 0:05:31.425 **********
2026-03-29 02:03:45.371976 | orchestrator | changed: [testbed-manager]
2026-03-29 02:03:45.371987 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:03:45.371998 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:03:45.372009 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:03:45.372019 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:03:45.372030 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:03:45.372040 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:03:45.372051 | orchestrator |
2026-03-29 02:03:45.372061 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-29 02:03:45.372072 | orchestrator | Sunday 29 March 2026 02:03:37 +0000 (0:00:08.611) 0:05:40.037 **********
2026-03-29 02:03:45.372091 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:03:45.372102 | orchestrator | ok: [testbed-manager]
2026-03-29 02:03:45.372112 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:03:45.372123 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:03:45.372133 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:03:45.372144 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:03:45.372155 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:03:45.372165 | orchestrator |
2026-03-29 02:03:45.372176 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-29 02:03:45.372187 | orchestrator | Sunday 29 March 2026 02:03:39 +0000 (0:00:01.573) 0:05:41.611 **********
2026-03-29 02:03:45.372197 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:03:45.372208 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:03:45.372218 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:03:45.372229 | orchestrator | changed: [testbed-manager]
2026-03-29 02:03:45.372239 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:03:45.372250 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:03:45.372261 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:03:45.372271 | orchestrator |
2026-03-29 02:03:45.372290 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-29 02:03:56.898750 | orchestrator | Sunday 29 March 2026 02:03:45 +0000 (0:00:06.149) 0:05:47.760 **********
2026-03-29 02:03:56.898861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:03:56.898878 | orchestrator |
2026-03-29 02:03:56.898891 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-29 02:03:56.898902 | orchestrator | Sunday 29 March 2026 02:03:45 +0000 (0:00:00.543) 0:05:48.304 **********
2026-03-29 02:03:56.898913 | orchestrator | changed: [testbed-manager]
2026-03-29 02:03:56.898925 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:03:56.898936 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:03:56.898946 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:03:56.898957 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:03:56.898967 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:03:56.898978 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:03:56.898989 | orchestrator |
2026-03-29 02:03:56.899018 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-29 02:03:56.899029 | orchestrator | Sunday 29 March 2026 02:03:46 +0000 (0:00:00.733) 0:05:49.037 **********
2026-03-29 02:03:56.899052 | orchestrator | ok: [testbed-manager]
2026-03-29 02:03:56.899064 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:03:56.899075 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:03:56.899086 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:03:56.899096 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:03:56.899107 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:03:56.899118 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:03:56.899128 | orchestrator |
2026-03-29 02:03:56.899139 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-29 02:03:56.899153 | orchestrator | Sunday 29 March 2026 02:03:48 +0000 (0:00:01.963) 0:05:51.001 **********
2026-03-29 02:03:56.899172 | orchestrator | changed: [testbed-manager]
2026-03-29 02:03:56.899190 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:03:56.899208 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:03:56.899225 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:03:56.899242 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:03:56.899259 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:03:56.899279 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:03:56.899298 | orchestrator |
2026-03-29 02:03:56.899316 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-29 02:03:56.899337 | orchestrator | Sunday 29 March 2026 02:03:49 +0000 (0:00:00.827) 0:05:51.828 **********
2026-03-29 02:03:56.899388 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:03:56.899409 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:03:56.899428 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:03:56.899445 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:03:56.899463 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:03:56.899483 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:03:56.899502 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:03:56.899522 | orchestrator |
2026-03-29 02:03:56.899542 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-29 02:03:56.899561 | orchestrator | Sunday 29 March 2026 02:03:49 +0000 (0:00:00.266) 0:05:52.095 **********
2026-03-29 02:03:56.899576 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:03:56.899589 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:03:56.899602 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:03:56.899672 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:03:56.899684 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:03:56.899695 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:03:56.899706 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:03:56.899716 | orchestrator |
2026-03-29 02:03:56.899727 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-29 02:03:56.899738 | orchestrator | Sunday 29 March 2026 02:03:50 +0000 (0:00:00.356) 0:05:52.451 **********
2026-03-29 02:03:56.899749 | orchestrator | ok: [testbed-manager]
2026-03-29 02:03:56.899760 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:03:56.899770 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:03:56.899781 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:03:56.899792 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:03:56.899803 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:03:56.899813 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:03:56.899824 | orchestrator |
2026-03-29 02:03:56.899835 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-29 02:03:56.899861 | orchestrator | Sunday 29 March 2026 02:03:50 +0000 (0:00:00.275) 0:05:52.727 **********
2026-03-29 02:03:56.899873 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:03:56.899883 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:03:56.899894 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:03:56.899905 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:03:56.899916 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:03:56.899927 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:03:56.899937 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:03:56.899948 | orchestrator |
2026-03-29 02:03:56.899959 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-29 02:03:56.899971 | orchestrator | Sunday 29 March 2026 02:03:50 +0000 (0:00:00.254) 0:05:52.982 **********
2026-03-29 02:03:56.899981 | orchestrator | ok: [testbed-manager]
2026-03-29 02:03:56.899992 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:03:56.900003 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:03:56.900014 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:03:56.900025 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:03:56.900035 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:03:56.900046 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:03:56.900057 | orchestrator |
2026-03-29 02:03:56.900069 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-29 02:03:56.900080 | orchestrator | Sunday 29 March 2026 02:03:50 +0000 (0:00:00.313) 0:05:53.295 **********
2026-03-29 02:03:56.900091 | orchestrator | ok: [testbed-manager] =>
2026-03-29 02:03:56.900102 | orchestrator |  docker_version: 5:27.5.1
2026-03-29 02:03:56.900113 | orchestrator | ok: [testbed-node-3] =>
2026-03-29 02:03:56.900123 | orchestrator |  docker_version: 5:27.5.1
2026-03-29 02:03:56.900134 | orchestrator | ok: [testbed-node-4] =>
2026-03-29 02:03:56.900145 | orchestrator |  docker_version: 5:27.5.1
2026-03-29 02:03:56.900155 | orchestrator | ok: [testbed-node-5] =>
2026-03-29 02:03:56.900166 | orchestrator |  docker_version: 5:27.5.1
2026-03-29 02:03:56.900197 | orchestrator | ok: [testbed-node-0] =>
2026-03-29 02:03:56.900225 | orchestrator |  docker_version: 5:27.5.1
2026-03-29 02:03:56.900243 | orchestrator | ok: [testbed-node-1] =>
2026-03-29 02:03:56.900262 | orchestrator |  docker_version: 5:27.5.1
2026-03-29 02:03:56.900291 | orchestrator | ok: [testbed-node-2] =>
2026-03-29 02:03:56.900311 | orchestrator |  docker_version: 5:27.5.1
2026-03-29 02:03:56.900330 | orchestrator |
2026-03-29 02:03:56.900348 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-29 02:03:56.900363 | orchestrator | Sunday 29 March 2026 02:03:51 +0000 (0:00:00.273) 0:05:53.569 **********
2026-03-29 02:03:56.900382 | orchestrator | ok: [testbed-manager] =>
2026-03-29 02:03:56.900399 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-29 02:03:56.900419 | orchestrator | ok: [testbed-node-3] =>
2026-03-29 02:03:56.900438 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-29 02:03:56.900456 | orchestrator | ok: [testbed-node-4] =>
2026-03-29 02:03:56.900476 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-29 02:03:56.900495 | orchestrator | ok: [testbed-node-5] =>
2026-03-29 02:03:56.900515 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-29 02:03:56.900534 | orchestrator | ok: [testbed-node-0] =>
2026-03-29 02:03:56.900564 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-29 02:03:56.900583 | orchestrator | ok: [testbed-node-1] =>
2026-03-29 02:03:56.900601 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-29 02:03:56.900645 | orchestrator | ok: [testbed-node-2] =>
2026-03-29 02:03:56.900665 | orchestrator |  docker_cli_version: 5:27.5.1
2026-03-29 02:03:56.900684 | orchestrator |
2026-03-29 02:03:56.900704 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-29 02:03:56.900722 | orchestrator | Sunday 29 March 2026 02:03:51 +0000 (0:00:00.309) 0:05:53.879 **********
2026-03-29 02:03:56.900740 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:03:56.900758 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:03:56.900776 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:03:56.900794 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:03:56.900811 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:03:56.900829 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:03:56.900849 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:03:56.900868 | orchestrator |
2026-03-29 02:03:56.900887 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-29 02:03:56.900904 | orchestrator | Sunday 29 March 2026 02:03:51 +0000 (0:00:00.271) 0:05:54.151 **********
2026-03-29 02:03:56.900922 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:03:56.900933 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:03:56.900944 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:03:56.900954 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:03:56.900965 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:03:56.900976 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:03:56.900986 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:03:56.900997 | orchestrator |
2026-03-29 02:03:56.901008 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-29 02:03:56.901019 | orchestrator | Sunday 29 March 2026 02:03:52 +0000 (0:00:00.272) 0:05:54.423 **********
2026-03-29 02:03:56.901032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:03:56.901045 | orchestrator |
2026-03-29 02:03:56.901056 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-29 02:03:56.901068 | orchestrator | Sunday 29 March 2026 02:03:52 +0000 (0:00:00.419) 0:05:54.842 **********
2026-03-29 02:03:56.901078 | orchestrator | ok: [testbed-manager]
2026-03-29 02:03:56.901089 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:03:56.901100 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:03:56.901111 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:03:56.901122 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:03:56.901132 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:03:56.901155 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:03:56.901166 | orchestrator |
2026-03-29 02:03:56.901177 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-29 02:03:56.901187 | orchestrator | Sunday 29 March 2026 02:03:53 +0000 (0:00:01.024) 0:05:55.867 **********
2026-03-29 02:03:56.901198 | orchestrator | ok: [testbed-manager]
2026-03-29 02:03:56.901209 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:03:56.901219 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:03:56.901230 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:03:56.901241 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:03:56.901251 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:03:56.901270 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:03:56.901281 | orchestrator |
2026-03-29 02:03:56.901292 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-29 02:03:56.901304 | orchestrator | Sunday 29 March 2026 02:03:56 +0000 (0:00:03.002) 0:05:58.869 **********
2026-03-29 02:03:56.901315 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-29 02:03:56.901326 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-29 02:03:56.901337 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-29 02:03:56.901348 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-29 02:03:56.901359 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-29 02:03:56.901369 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-29 02:03:56.901380 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:03:56.901391 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-29 02:03:56.901402 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-29 02:03:56.901413 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:03:56.901423 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-29 02:03:56.901434 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-29 02:03:56.901444 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-29 02:03:56.901455 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-29 02:03:56.901466 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:03:56.901477 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-29 02:03:56.901500 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-29 02:05:03.981329 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-29 02:05:03.981409 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:03.981417 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-29 02:05:03.981422 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-29 02:05:03.981427 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-29 02:05:03.981431 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:03.981435 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:03.981440 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-29 02:05:03.981444 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-29 02:05:03.981448 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-29 02:05:03.981452 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:03.981457 | orchestrator |
2026-03-29 02:05:03.981462 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-29 02:05:03.981468 | orchestrator | Sunday 29 March 2026 02:03:57 +0000 (0:00:00.620) 0:05:59.489 **********
2026-03-29 02:05:03.981472 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.981476 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981480 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981484 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981488 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981492 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981497 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981516 | orchestrator |
2026-03-29 02:05:03.981521 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-29 02:05:03.981525 | orchestrator | Sunday 29 March 2026 02:04:05 +0000 (0:00:07.946) 0:06:07.436 **********
2026-03-29 02:05:03.981529 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981533 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981537 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981540 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.981544 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981548 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981552 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981556 | orchestrator |
2026-03-29 02:05:03.981560 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-29 02:05:03.981564 | orchestrator | Sunday 29 March 2026 02:04:06 +0000 (0:00:01.107) 0:06:08.543 **********
2026-03-29 02:05:03.981568 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.981572 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981576 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981580 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981584 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981588 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981592 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981602 | orchestrator |
2026-03-29 02:05:03.981606 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-29 02:05:03.981610 | orchestrator | Sunday 29 March 2026 02:04:15 +0000 (0:00:09.099) 0:06:17.643 **********
2026-03-29 02:05:03.981642 | orchestrator | changed: [testbed-manager]
2026-03-29 02:05:03.981647 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981651 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981655 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981659 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981663 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981667 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981671 | orchestrator |
2026-03-29 02:05:03.981675 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-29 02:05:03.981679 | orchestrator | Sunday 29 March 2026 02:04:18 +0000 (0:00:03.425) 0:06:21.069 **********
2026-03-29 02:05:03.981684 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.981688 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981692 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981696 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981699 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981703 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981707 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981711 | orchestrator |
2026-03-29 02:05:03.981716 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-29 02:05:03.981720 | orchestrator | Sunday 29 March 2026 02:04:20 +0000 (0:00:01.380) 0:06:22.450 **********
2026-03-29 02:05:03.981723 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.981727 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981732 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981736 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981740 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981744 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981748 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981752 | orchestrator |
2026-03-29 02:05:03.981756 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-29 02:05:03.981760 | orchestrator | Sunday 29 March 2026 02:04:21 +0000 (0:00:01.635) 0:06:24.085 **********
2026-03-29 02:05:03.981764 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:03.981769 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:03.981772 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:03.981776 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:03.981780 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:03.981788 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:03.981792 | orchestrator | changed: [testbed-manager]
2026-03-29 02:05:03.981796 | orchestrator |
2026-03-29 02:05:03.981800 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-29 02:05:03.981804 | orchestrator | Sunday 29 March 2026 02:04:22 +0000 (0:00:00.644) 0:06:24.729 **********
2026-03-29 02:05:03.981808 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.981812 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981816 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981820 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981824 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981828 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981832 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981836 | orchestrator |
2026-03-29 02:05:03.981840 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-29 02:05:03.981855 | orchestrator | Sunday 29 March 2026 02:04:33 +0000 (0:00:10.780) 0:06:35.510 **********
2026-03-29 02:05:03.981859 | orchestrator | changed: [testbed-manager]
2026-03-29 02:05:03.981863 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981867 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981871 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981874 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981878 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981882 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981886 | orchestrator |
2026-03-29 02:05:03.981890 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-29 02:05:03.981894 | orchestrator | Sunday 29 March 2026 02:04:34 +0000 (0:00:00.919) 0:06:36.429 **********
2026-03-29 02:05:03.981898 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.981902 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981906 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981910 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981914 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981918 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981922 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981926 | orchestrator |
2026-03-29 02:05:03.981930 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-29 02:05:03.981934 | orchestrator | Sunday 29 March 2026 02:04:44 +0000 (0:00:10.205) 0:06:46.635 **********
2026-03-29 02:05:03.981938 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.981941 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.981945 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.981949 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.981953 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.981957 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.981961 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.981965 | orchestrator |
2026-03-29 02:05:03.981969 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-29 02:05:03.981973 | orchestrator | Sunday 29 March 2026 02:04:56 +0000 (0:00:12.453) 0:06:59.089 **********
2026-03-29 02:05:03.981977 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-29 02:05:03.981981 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-29 02:05:03.981985 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-29 02:05:03.981989 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-29 02:05:03.981993 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-29 02:05:03.981997 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-29 02:05:03.982001 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-29 02:05:03.982005 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-29 02:05:03.982009 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-29 02:05:03.982047 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-29 02:05:03.982056 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-29 02:05:03.982093 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-29 02:05:03.982097 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-29 02:05:03.982101 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-29 02:05:03.982105 | orchestrator |
2026-03-29 02:05:03.982109 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-29 02:05:03.982113 | orchestrator | Sunday 29 March 2026 02:04:57 +0000 (0:00:01.182) 0:07:00.271 **********
2026-03-29 02:05:03.982117 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:03.982121 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:03.982125 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:03.982129 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:03.982133 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:03.982137 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:03.982141 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:03.982145 | orchestrator |
2026-03-29 02:05:03.982149 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-29 02:05:03.982153 | orchestrator | Sunday 29 March 2026 02:04:58 +0000 (0:00:00.600) 0:07:00.872 **********
2026-03-29 02:05:03.982157 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:03.982161 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:03.982165 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:03.982169 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:03.982173 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:03.982176 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:03.982180 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:03.982184 | orchestrator |
2026-03-29 02:05:03.982191 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-29 02:05:03.982196 | orchestrator | Sunday 29 March 2026 02:05:02 +0000 (0:00:04.419) 0:07:05.291 **********
2026-03-29 02:05:03.982200 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:03.982204 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:03.982208 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:03.982212 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:03.982216 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:03.982220 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:03.982224 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:03.982228 | orchestrator |
2026-03-29 02:05:03.982233 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-29 02:05:03.982237 | orchestrator | Sunday 29 March 2026 02:05:03 +0000 (0:00:00.535) 0:07:05.827 **********
2026-03-29 02:05:03.982241 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-29 02:05:03.982246 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-29 02:05:03.982250 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:03.982254 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-29 02:05:03.982258 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-29 02:05:03.982262 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:03.982266 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-29 02:05:03.982270 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-29 02:05:03.982274 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:03.982280 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-29 02:05:23.675494 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-29 02:05:23.675673 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:23.675685 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-29 02:05:23.675690 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-29 02:05:23.675696 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:23.675722 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-29 02:05:23.675728 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-29 02:05:23.675733 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:23.675738 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-29 02:05:23.675743 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-29 02:05:23.675747 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:23.675752 | orchestrator |
2026-03-29 02:05:23.675759 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-29 02:05:23.675765 | orchestrator | Sunday 29 March 2026 02:05:04 +0000 (0:00:00.807) 0:07:06.635 **********
2026-03-29 02:05:23.675770 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:23.675775 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:23.675779 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:23.675784 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:23.675788 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:23.675793 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:23.675798 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:23.675802 | orchestrator |
2026-03-29 02:05:23.675807 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-29 02:05:23.675813 | orchestrator | Sunday 29 March 2026 02:05:04 +0000 (0:00:00.523) 0:07:07.159 **********
2026-03-29 02:05:23.675817 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:23.675822 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:23.675826 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:23.675831 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:23.675835 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:23.675840 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:23.675844 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:23.675849 | orchestrator |
2026-03-29 02:05:23.675853 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-29 02:05:23.675858 | orchestrator | Sunday 29 March 2026 02:05:05 +0000 (0:00:00.515) 0:07:07.675 **********
2026-03-29 02:05:23.675863 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:23.675867 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:23.675872 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:23.675876 | orchestrator | skipping:
[testbed-node-5] 2026-03-29 02:05:23.675881 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:05:23.675885 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:05:23.675890 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:05:23.675894 | orchestrator | 2026-03-29 02:05:23.675899 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-29 02:05:23.675903 | orchestrator | Sunday 29 March 2026 02:05:05 +0000 (0:00:00.533) 0:07:08.209 ********** 2026-03-29 02:05:23.675908 | orchestrator | ok: [testbed-manager] 2026-03-29 02:05:23.675913 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:05:23.675917 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:05:23.675922 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:05:23.675927 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:05:23.675931 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:05:23.675936 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:05:23.675940 | orchestrator | 2026-03-29 02:05:23.675945 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-29 02:05:23.675950 | orchestrator | Sunday 29 March 2026 02:05:07 +0000 (0:00:01.986) 0:07:10.195 ********** 2026-03-29 02:05:23.675956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:05:23.675963 | orchestrator | 2026-03-29 02:05:23.675968 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-29 02:05:23.675972 | orchestrator | Sunday 29 March 2026 02:05:08 +0000 (0:00:00.869) 0:07:11.065 ********** 2026-03-29 02:05:23.675987 | orchestrator | ok: [testbed-manager] 2026-03-29 02:05:23.675993 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:05:23.675997 | orchestrator | changed: 
[testbed-node-4] 2026-03-29 02:05:23.676002 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:05:23.676007 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:05:23.676011 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:05:23.676016 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:05:23.676021 | orchestrator | 2026-03-29 02:05:23.676027 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-29 02:05:23.676032 | orchestrator | Sunday 29 March 2026 02:05:09 +0000 (0:00:00.800) 0:07:11.865 ********** 2026-03-29 02:05:23.676037 | orchestrator | ok: [testbed-manager] 2026-03-29 02:05:23.676043 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:05:23.676048 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:05:23.676053 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:05:23.676058 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:05:23.676064 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:05:23.676069 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:05:23.676074 | orchestrator | 2026-03-29 02:05:23.676080 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-29 02:05:23.676085 | orchestrator | Sunday 29 March 2026 02:05:10 +0000 (0:00:00.802) 0:07:12.668 ********** 2026-03-29 02:05:23.676091 | orchestrator | ok: [testbed-manager] 2026-03-29 02:05:23.676096 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:05:23.676101 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:05:23.676106 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:05:23.676112 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:05:23.676117 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:05:23.676122 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:05:23.676127 | orchestrator | 2026-03-29 02:05:23.676133 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-03-29 02:05:23.676153 | orchestrator | Sunday 29 March 2026 02:05:11 +0000 (0:00:01.457) 0:07:14.125 ********** 2026-03-29 02:05:23.676159 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:05:23.676164 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:05:23.676169 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:05:23.676175 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:05:23.676180 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:05:23.676185 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:05:23.676191 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:05:23.676197 | orchestrator | 2026-03-29 02:05:23.676202 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-29 02:05:23.676207 | orchestrator | Sunday 29 March 2026 02:05:13 +0000 (0:00:01.629) 0:07:15.755 ********** 2026-03-29 02:05:23.676213 | orchestrator | ok: [testbed-manager] 2026-03-29 02:05:23.676218 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:05:23.676224 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:05:23.676229 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:05:23.676234 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:05:23.676239 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:05:23.676245 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:05:23.676250 | orchestrator | 2026-03-29 02:05:23.676256 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-29 02:05:23.676261 | orchestrator | Sunday 29 March 2026 02:05:14 +0000 (0:00:01.447) 0:07:17.203 ********** 2026-03-29 02:05:23.676266 | orchestrator | changed: [testbed-manager] 2026-03-29 02:05:23.676271 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:05:23.676276 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:05:23.676282 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:05:23.676287 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 02:05:23.676292 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:05:23.676297 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:05:23.676302 | orchestrator | 2026-03-29 02:05:23.676308 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-29 02:05:23.676319 | orchestrator | Sunday 29 March 2026 02:05:16 +0000 (0:00:01.479) 0:07:18.683 ********** 2026-03-29 02:05:23.676324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:05:23.676329 | orchestrator | 2026-03-29 02:05:23.676335 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-29 02:05:23.676340 | orchestrator | Sunday 29 March 2026 02:05:17 +0000 (0:00:01.026) 0:07:19.709 ********** 2026-03-29 02:05:23.676345 | orchestrator | ok: [testbed-manager] 2026-03-29 02:05:23.676350 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:05:23.676356 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:05:23.676361 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:05:23.676367 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:05:23.676372 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:05:23.676377 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:05:23.676382 | orchestrator | 2026-03-29 02:05:23.676387 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-29 02:05:23.676392 | orchestrator | Sunday 29 March 2026 02:05:18 +0000 (0:00:01.483) 0:07:21.193 ********** 2026-03-29 02:05:23.676397 | orchestrator | ok: [testbed-manager] 2026-03-29 02:05:23.676401 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:05:23.676406 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:05:23.676410 | orchestrator | ok: [testbed-node-5] 
2026-03-29 02:05:23.676415 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:05:23.676419 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:05:23.676424 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:05:23.676428 | orchestrator |
2026-03-29 02:05:23.676433 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-29 02:05:23.676438 | orchestrator | Sunday 29 March 2026 02:05:19 +0000 (0:00:01.156) 0:07:22.349 **********
2026-03-29 02:05:23.676442 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:23.676447 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:05:23.676451 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:05:23.676456 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:05:23.676460 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:05:23.676465 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:05:23.676469 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:05:23.676474 | orchestrator |
2026-03-29 02:05:23.676479 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-29 02:05:23.676483 | orchestrator | Sunday 29 March 2026 02:05:21 +0000 (0:00:01.140) 0:07:23.489 **********
2026-03-29 02:05:23.676488 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:23.676492 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:05:23.676510 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:05:23.676515 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:05:23.676520 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:05:23.676524 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:05:23.676529 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:05:23.676533 | orchestrator |
2026-03-29 02:05:23.676538 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-29 02:05:23.676543 | orchestrator | Sunday 29 March 2026 02:05:22 +0000 (0:00:01.381) 0:07:24.870 **********
2026-03-29 02:05:23.676547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:05:23.676552 | orchestrator |
2026-03-29 02:05:23.676556 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 02:05:23.676561 | orchestrator | Sunday 29 March 2026 02:05:23 +0000 (0:00:00.897) 0:07:25.768 **********
2026-03-29 02:05:23.676566 | orchestrator |
2026-03-29 02:05:23.676570 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 02:05:23.676575 | orchestrator | Sunday 29 March 2026 02:05:23 +0000 (0:00:00.039) 0:07:25.807 **********
2026-03-29 02:05:23.676584 | orchestrator |
2026-03-29 02:05:23.676588 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 02:05:23.676593 | orchestrator | Sunday 29 March 2026 02:05:23 +0000 (0:00:00.045) 0:07:25.852 **********
2026-03-29 02:05:23.676598 | orchestrator |
2026-03-29 02:05:23.676602 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 02:05:23.676609 | orchestrator | Sunday 29 March 2026 02:05:23 +0000 (0:00:00.038) 0:07:25.891 **********
2026-03-29 02:05:51.918367 | orchestrator |
2026-03-29 02:05:51.918511 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 02:05:51.918537 | orchestrator | Sunday 29 March 2026 02:05:23 +0000 (0:00:00.049) 0:07:25.940 **********
2026-03-29 02:05:51.918556 | orchestrator |
2026-03-29 02:05:51.918577 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 02:05:51.918596 | orchestrator | Sunday 29 March 2026 02:05:23 +0000 (0:00:00.043) 0:07:25.984 **********
2026-03-29 02:05:51.918614 | orchestrator |
2026-03-29 02:05:51.918663 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 02:05:51.918681 | orchestrator | Sunday 29 March 2026 02:05:23 +0000 (0:00:00.037) 0:07:26.021 **********
2026-03-29 02:05:51.918700 | orchestrator |
2026-03-29 02:05:51.918719 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-29 02:05:51.918738 | orchestrator | Sunday 29 March 2026 02:05:23 +0000 (0:00:00.037) 0:07:26.059 **********
2026-03-29 02:05:51.918757 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:05:51.918777 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:05:51.918795 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:05:51.918813 | orchestrator |
2026-03-29 02:05:51.918832 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-29 02:05:51.918852 | orchestrator | Sunday 29 March 2026 02:05:24 +0000 (0:00:01.272) 0:07:27.331 **********
2026-03-29 02:05:51.918871 | orchestrator | changed: [testbed-manager]
2026-03-29 02:05:51.918891 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:51.918909 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:51.918929 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:51.918948 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:51.918968 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:51.918987 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:51.919006 | orchestrator |
2026-03-29 02:05:51.919024 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-29 02:05:51.919044 | orchestrator | Sunday 29 March 2026 02:05:26 +0000 (0:00:01.634) 0:07:28.965 **********
2026-03-29 02:05:51.919063 | orchestrator | changed: [testbed-manager]
2026-03-29 02:05:51.919082 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:51.919100 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:51.919118 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:51.919137 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:51.919157 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:51.919177 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:51.919195 | orchestrator |
2026-03-29 02:05:51.919211 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-29 02:05:51.919227 | orchestrator | Sunday 29 March 2026 02:05:27 +0000 (0:00:01.222) 0:07:30.188 **********
2026-03-29 02:05:51.919243 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:51.919258 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:51.919271 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:51.919287 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:51.919304 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:51.919320 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:51.919337 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:51.919353 | orchestrator |
2026-03-29 02:05:51.919370 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-29 02:05:51.919386 | orchestrator | Sunday 29 March 2026 02:05:30 +0000 (0:00:02.268) 0:07:32.457 **********
2026-03-29 02:05:51.919428 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:51.919445 | orchestrator |
2026-03-29 02:05:51.919462 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-29 02:05:51.919478 | orchestrator | Sunday 29 March 2026 02:05:30 +0000 (0:00:00.108) 0:07:32.565 **********
2026-03-29 02:05:51.919495 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:51.919511 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:51.919527 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:51.919543 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:51.919560 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:05:51.919576 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:51.919592 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:51.919608 | orchestrator |
2026-03-29 02:05:51.919707 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-29 02:05:51.919725 | orchestrator | Sunday 29 March 2026 02:05:31 +0000 (0:00:01.026) 0:07:33.592 **********
2026-03-29 02:05:51.919742 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:51.919759 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:51.919792 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:51.919809 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:51.919826 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:51.919842 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:51.919858 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:51.919876 | orchestrator |
2026-03-29 02:05:51.919892 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-29 02:05:51.919908 | orchestrator | Sunday 29 March 2026 02:05:31 +0000 (0:00:00.562) 0:07:34.154 **********
2026-03-29 02:05:51.919919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:05:51.919932 | orchestrator |
2026-03-29 02:05:51.919942 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-29 02:05:51.919951 | orchestrator | Sunday 29 March 2026 02:05:32 +0000 (0:00:01.157) 0:07:35.312 **********
2026-03-29 02:05:51.919961 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:51.919971 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:05:51.919980 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:05:51.919990 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:05:51.919999 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:05:51.920009 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:05:51.920018 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:05:51.920028 | orchestrator |
2026-03-29 02:05:51.920038 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-29 02:05:51.920048 | orchestrator | Sunday 29 March 2026 02:05:33 +0000 (0:00:00.873) 0:07:36.185 **********
2026-03-29 02:05:51.920057 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-29 02:05:51.920087 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-29 02:05:51.920097 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-29 02:05:51.920107 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-29 02:05:51.920117 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-29 02:05:51.920126 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-29 02:05:51.920136 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-29 02:05:51.920145 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-29 02:05:51.920155 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-29 02:05:51.920164 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-29 02:05:51.920173 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-29 02:05:51.920183 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-29 02:05:51.920209 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-29 02:05:51.920225 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-29 02:05:51.920240 | orchestrator |
2026-03-29 02:05:51.920256 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-29 02:05:51.920272 | orchestrator | Sunday 29 March 2026 02:05:36 +0000 (0:00:02.595) 0:07:38.780 **********
2026-03-29 02:05:51.920290 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:51.920307 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:51.920322 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:51.920340 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:51.920357 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:51.920373 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:51.920389 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:51.920404 | orchestrator |
2026-03-29 02:05:51.920417 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-29 02:05:51.920432 | orchestrator | Sunday 29 March 2026 02:05:37 +0000 (0:00:01.030) 0:07:39.811 **********
2026-03-29 02:05:51.920449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:05:51.920467 | orchestrator |
2026-03-29 02:05:51.920483 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-29 02:05:51.920501 | orchestrator | Sunday 29 March 2026 02:05:38 +0000 (0:00:01.008) 0:07:40.819 **********
2026-03-29 02:05:51.920517 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:51.920532 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:05:51.920548 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:05:51.920562 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:05:51.920573 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:05:51.920590 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:05:51.920606 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:05:51.920694 | orchestrator |
2026-03-29 02:05:51.920715 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-29 02:05:51.920733 | orchestrator | Sunday 29 March 2026 02:05:39 +0000 (0:00:00.853) 0:07:41.673 **********
2026-03-29 02:05:51.920750 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:51.920765 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:05:51.920775 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:05:51.920784 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:05:51.920793 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:05:51.920803 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:05:51.920812 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:05:51.920822 | orchestrator |
2026-03-29 02:05:51.920831 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-29 02:05:51.920841 | orchestrator | Sunday 29 March 2026 02:05:40 +0000 (0:00:01.101) 0:07:42.774 **********
2026-03-29 02:05:51.920850 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:51.920860 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:51.920869 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:51.920879 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:51.920888 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:51.920898 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:51.920907 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:51.920916 | orchestrator |
2026-03-29 02:05:51.920926 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-29 02:05:51.920936 | orchestrator | Sunday 29 March 2026 02:05:40 +0000 (0:00:00.555) 0:07:43.330 **********
2026-03-29 02:05:51.920945 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:51.920960 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:05:51.920976 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:05:51.920992 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:05:51.921007 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:05:51.921024 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:05:51.921054 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:05:51.921071 | orchestrator |
2026-03-29 02:05:51.921087 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-29 02:05:51.921102 | orchestrator | Sunday 29 March 2026 02:05:42 +0000 (0:00:01.719) 0:07:45.049 **********
2026-03-29 02:05:51.921112 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:05:51.921122 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:05:51.921131 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:05:51.921141 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:05:51.921150 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:05:51.921160 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:05:51.921169 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:05:51.921179 | orchestrator |
2026-03-29 02:05:51.921188 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-29 02:05:51.921198 | orchestrator | Sunday 29 March 2026 02:05:43 +0000 (0:00:00.553) 0:07:45.603 **********
2026-03-29 02:05:51.921210 | orchestrator | ok: [testbed-manager]
2026-03-29 02:05:51.921226 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:05:51.921250 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:05:51.921269 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:05:51.921283 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:05:51.921298 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:05:51.921325 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:06:26.449071 | orchestrator |
2026-03-29 02:06:26.449158 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-29 02:06:26.449170 | orchestrator | Sunday 29 March 2026 02:05:51 +0000 (0:00:08.707) 0:07:54.311 **********
2026-03-29 02:06:26.449178 | orchestrator | ok: [testbed-manager]
2026-03-29 02:06:26.449188 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:06:26.449196 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:06:26.449203 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:06:26.449211 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:06:26.449219 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:06:26.449227 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:06:26.449234 | orchestrator |
2026-03-29 02:06:26.449242 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-29 02:06:26.449249 | orchestrator | Sunday 29 March 2026 02:05:53 +0000 (0:00:01.637) 0:07:55.949 **********
2026-03-29 02:06:26.449256 | orchestrator | ok: [testbed-manager]
2026-03-29 02:06:26.449264 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:06:26.449272 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:06:26.449280 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:06:26.449287 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:06:26.449295 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:06:26.449302 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:06:26.449309 | orchestrator |
2026-03-29 02:06:26.449317 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-29 02:06:26.449324 | orchestrator | Sunday 29 March 2026 02:05:55 +0000 (0:00:01.804) 0:07:57.753 **********
2026-03-29 02:06:26.449331 | orchestrator | ok: [testbed-manager]
2026-03-29 02:06:26.449339 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:06:26.449346 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:06:26.449353 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:06:26.449375 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:06:26.449381 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:06:26.449388 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:06:26.449402 | orchestrator |
2026-03-29 02:06:26.449409 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-29 02:06:26.449415 | orchestrator | Sunday 29 March 2026 02:05:57 +0000 (0:00:01.764) 0:07:59.517 **********
2026-03-29 02:06:26.449422 | orchestrator | ok: [testbed-manager]
2026-03-29 02:06:26.449428 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:06:26.449435 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:06:26.449442 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:06:26.449469 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:06:26.449476 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:06:26.449482 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:06:26.449488 | orchestrator |
2026-03-29 02:06:26.449495 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-29 02:06:26.449501 | orchestrator | Sunday 29 March 2026 02:05:57 +0000 (0:00:00.864) 0:08:00.382 **********
2026-03-29 02:06:26.449508 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:06:26.449514 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:06:26.449521 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:06:26.449527 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:06:26.449533 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:06:26.449540 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:06:26.449546 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:06:26.449552 | orchestrator |
2026-03-29 02:06:26.449558 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-29 02:06:26.449565 | orchestrator | Sunday 29 March 2026 02:05:58 +0000 (0:00:00.983) 0:08:01.366 **********
2026-03-29 02:06:26.449571 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:06:26.449578 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:06:26.449585 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:06:26.449591 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:06:26.449598 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:06:26.449605 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:06:26.449612 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:06:26.449652 | orchestrator | 2026-03-29 02:06:26.449660 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-29 02:06:26.449667 | orchestrator | Sunday 29 March 2026 02:05:59 +0000 (0:00:00.510) 0:08:01.876 ********** 2026-03-29 02:06:26.449674 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:26.449696 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:26.449703 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:26.449710 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:26.449717 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:26.449723 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:26.449730 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:26.449736 | orchestrator | 2026-03-29 02:06:26.449747 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-03-29 02:06:26.449755 | orchestrator | Sunday 29 March 2026 02:05:59 +0000 (0:00:00.499) 0:08:02.375 ********** 2026-03-29 02:06:26.449761 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:26.449768 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:26.449775 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:26.449781 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:26.449788 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:26.449795 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:26.449801 | orchestrator | ok: [testbed-node-2] 2026-03-29 
02:06:26.449808 | orchestrator | 2026-03-29 02:06:26.449815 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-29 02:06:26.449822 | orchestrator | Sunday 29 March 2026 02:06:00 +0000 (0:00:00.525) 0:08:02.901 ********** 2026-03-29 02:06:26.449829 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:26.449836 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:26.449843 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:26.449850 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:26.449856 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:26.449863 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:26.449869 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:26.449876 | orchestrator | 2026-03-29 02:06:26.449883 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-29 02:06:26.449890 | orchestrator | Sunday 29 March 2026 02:06:01 +0000 (0:00:00.788) 0:08:03.690 ********** 2026-03-29 02:06:26.449898 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:26.449904 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:26.449912 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:26.449927 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:26.449934 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:26.449941 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:26.449949 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:26.449955 | orchestrator | 2026-03-29 02:06:26.449980 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-29 02:06:26.449987 | orchestrator | Sunday 29 March 2026 02:06:06 +0000 (0:00:05.239) 0:08:08.929 ********** 2026-03-29 02:06:26.449993 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:06:26.449999 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:06:26.450006 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:06:26.450013 
| orchestrator | skipping: [testbed-node-5] 2026-03-29 02:06:26.450067 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:06:26.450074 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:06:26.450082 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:06:26.450088 | orchestrator | 2026-03-29 02:06:26.450095 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-29 02:06:26.450101 | orchestrator | Sunday 29 March 2026 02:06:07 +0000 (0:00:00.616) 0:08:09.546 ********** 2026-03-29 02:06:26.450110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:06:26.450118 | orchestrator | 2026-03-29 02:06:26.450125 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-29 02:06:26.450131 | orchestrator | Sunday 29 March 2026 02:06:08 +0000 (0:00:01.199) 0:08:10.745 ********** 2026-03-29 02:06:26.450138 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:26.450145 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:26.450151 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:26.450158 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:26.450166 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:26.450173 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:26.450179 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:26.450186 | orchestrator | 2026-03-29 02:06:26.450193 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-29 02:06:26.450200 | orchestrator | Sunday 29 March 2026 02:06:10 +0000 (0:00:02.197) 0:08:12.943 ********** 2026-03-29 02:06:26.450207 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:26.450215 | orchestrator | ok: [testbed-node-3] 2026-03-29 
02:06:26.450222 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:26.450230 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:26.450237 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:26.450244 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:26.450251 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:26.450258 | orchestrator | 2026-03-29 02:06:26.450265 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-29 02:06:26.450273 | orchestrator | Sunday 29 March 2026 02:06:11 +0000 (0:00:01.240) 0:08:14.184 ********** 2026-03-29 02:06:26.450280 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:26.450287 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:26.450294 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:26.450301 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:26.450308 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:26.450316 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:26.450323 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:26.450330 | orchestrator | 2026-03-29 02:06:26.450338 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-29 02:06:26.450345 | orchestrator | Sunday 29 March 2026 02:06:12 +0000 (0:00:00.873) 0:08:15.057 ********** 2026-03-29 02:06:26.450352 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-29 02:06:26.450361 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-29 02:06:26.450378 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-29 02:06:26.450385 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-29 02:06:26.450392 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-29 02:06:26.450404 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-29 02:06:26.450411 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-29 02:06:26.450418 | orchestrator | 2026-03-29 02:06:26.450425 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-29 02:06:26.450432 | orchestrator | Sunday 29 March 2026 02:06:14 +0000 (0:00:02.194) 0:08:17.252 ********** 2026-03-29 02:06:26.450440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:06:26.450447 | orchestrator | 2026-03-29 02:06:26.450454 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-29 02:06:26.450462 | orchestrator | Sunday 29 March 2026 02:06:15 +0000 (0:00:01.033) 0:08:18.285 ********** 2026-03-29 02:06:26.450469 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:26.450476 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:26.450484 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:26.450491 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:26.450499 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:26.450506 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:26.450513 | orchestrator | changed: 
[testbed-node-5] 2026-03-29 02:06:26.450520 | orchestrator | 2026-03-29 02:06:26.450536 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-29 02:06:59.544865 | orchestrator | Sunday 29 March 2026 02:06:26 +0000 (0:00:10.548) 0:08:28.834 ********** 2026-03-29 02:06:59.544956 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:59.544973 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:59.544984 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:59.544993 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:59.545002 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:59.545010 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:59.545019 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:59.545028 | orchestrator | 2026-03-29 02:06:59.545038 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-29 02:06:59.545048 | orchestrator | Sunday 29 March 2026 02:06:28 +0000 (0:00:02.068) 0:08:30.903 ********** 2026-03-29 02:06:59.545057 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:59.545065 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:59.545073 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:59.545082 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:59.545090 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:59.545099 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:59.545108 | orchestrator | 2026-03-29 02:06:59.545117 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-29 02:06:59.545126 | orchestrator | Sunday 29 March 2026 02:06:29 +0000 (0:00:01.414) 0:08:32.317 ********** 2026-03-29 02:06:59.545135 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.545145 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:59.545153 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.545162 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 02:06:59.545171 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.545201 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:06:59.545211 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.545219 | orchestrator | 2026-03-29 02:06:59.545228 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-29 02:06:59.545237 | orchestrator | 2026-03-29 02:06:59.545246 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-29 02:06:59.545255 | orchestrator | Sunday 29 March 2026 02:06:31 +0000 (0:00:01.305) 0:08:33.623 ********** 2026-03-29 02:06:59.545264 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:06:59.545273 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:06:59.545282 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:06:59.545290 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:06:59.545299 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:06:59.545308 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:06:59.545316 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:06:59.545325 | orchestrator | 2026-03-29 02:06:59.545333 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-29 02:06:59.545341 | orchestrator | 2026-03-29 02:06:59.545348 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-29 02:06:59.545356 | orchestrator | Sunday 29 March 2026 02:06:32 +0000 (0:00:00.803) 0:08:34.426 ********** 2026-03-29 02:06:59.545364 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.545371 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.545379 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:59.545387 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:59.545395 | orchestrator | changed: [testbed-node-5] 2026-03-29 
02:06:59.545403 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.545411 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.545419 | orchestrator | 2026-03-29 02:06:59.545426 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-29 02:06:59.545434 | orchestrator | Sunday 29 March 2026 02:06:33 +0000 (0:00:01.381) 0:08:35.808 ********** 2026-03-29 02:06:59.545442 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:59.545451 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:59.545458 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:59.545467 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:59.545475 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:59.545483 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:59.545491 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:59.545499 | orchestrator | 2026-03-29 02:06:59.545508 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-29 02:06:59.545516 | orchestrator | Sunday 29 March 2026 02:06:34 +0000 (0:00:01.512) 0:08:37.321 ********** 2026-03-29 02:06:59.545525 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:06:59.545533 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:06:59.545542 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:06:59.545551 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:06:59.545559 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:06:59.545568 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:06:59.545590 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:06:59.545600 | orchestrator | 2026-03-29 02:06:59.545609 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-29 02:06:59.545643 | orchestrator | Sunday 29 March 2026 02:06:35 +0000 (0:00:00.484) 0:08:37.806 ********** 2026-03-29 02:06:59.545653 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:06:59.545663 | orchestrator | 2026-03-29 02:06:59.545672 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-29 02:06:59.545680 | orchestrator | Sunday 29 March 2026 02:06:36 +0000 (0:00:00.987) 0:08:38.793 ********** 2026-03-29 02:06:59.545691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:06:59.545712 | orchestrator | 2026-03-29 02:06:59.545721 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-29 02:06:59.545729 | orchestrator | Sunday 29 March 2026 02:06:37 +0000 (0:00:00.790) 0:08:39.584 ********** 2026-03-29 02:06:59.545738 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.545746 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:59.545755 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.545764 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.545773 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:59.545782 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.545791 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:06:59.545799 | orchestrator | 2026-03-29 02:06:59.545827 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-29 02:06:59.545837 | orchestrator | Sunday 29 March 2026 02:06:47 +0000 (0:00:10.197) 0:08:49.781 ********** 2026-03-29 02:06:59.545845 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.545854 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.545862 | orchestrator | changed: [testbed-node-4] 2026-03-29 
02:06:59.545871 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:06:59.545879 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:59.545888 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.545897 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.545905 | orchestrator | 2026-03-29 02:06:59.545914 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-29 02:06:59.545922 | orchestrator | Sunday 29 March 2026 02:06:48 +0000 (0:00:01.133) 0:08:50.915 ********** 2026-03-29 02:06:59.545930 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.545939 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.545947 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:59.545956 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:59.545965 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:06:59.545973 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.545982 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.545991 | orchestrator | 2026-03-29 02:06:59.546000 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-29 02:06:59.546009 | orchestrator | Sunday 29 March 2026 02:06:49 +0000 (0:00:01.418) 0:08:52.334 ********** 2026-03-29 02:06:59.546070 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.546080 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.546088 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:59.546097 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:06:59.546105 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:59.546114 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.546122 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.546131 | orchestrator | 2026-03-29 02:06:59.546140 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-29 02:06:59.546149 | orchestrator | Sunday 29 March 2026 02:06:51 +0000 (0:00:01.962) 0:08:54.296 ********** 2026-03-29 02:06:59.546158 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.546167 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.546176 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:59.546184 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:06:59.546193 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:59.546202 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.546211 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.546219 | orchestrator | 2026-03-29 02:06:59.546228 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-29 02:06:59.546237 | orchestrator | Sunday 29 March 2026 02:06:53 +0000 (0:00:01.293) 0:08:55.590 ********** 2026-03-29 02:06:59.546246 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.546254 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.546263 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:59.546281 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:06:59.546291 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:59.546299 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.546308 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.546316 | orchestrator | 2026-03-29 02:06:59.546325 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-29 02:06:59.546333 | orchestrator | 2026-03-29 02:06:59.546341 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-29 02:06:59.546349 | orchestrator | Sunday 29 March 2026 02:06:54 +0000 (0:00:01.186) 0:08:56.776 ********** 2026-03-29 02:06:59.546357 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-29 02:06:59.546364 | orchestrator | 2026-03-29 02:06:59.546372 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-29 02:06:59.546380 | orchestrator | Sunday 29 March 2026 02:06:55 +0000 (0:00:00.864) 0:08:57.640 ********** 2026-03-29 02:06:59.546388 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:59.546397 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:59.546406 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:59.546415 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:59.546423 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:59.546431 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:59.546440 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:59.546448 | orchestrator | 2026-03-29 02:06:59.546464 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-29 02:06:59.546474 | orchestrator | Sunday 29 March 2026 02:06:56 +0000 (0:00:01.126) 0:08:58.767 ********** 2026-03-29 02:06:59.546483 | orchestrator | changed: [testbed-manager] 2026-03-29 02:06:59.546493 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:06:59.546502 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:06:59.546511 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:06:59.546520 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:06:59.546529 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:06:59.546538 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:06:59.546548 | orchestrator | 2026-03-29 02:06:59.546557 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-29 02:06:59.546566 | orchestrator | Sunday 29 March 2026 02:06:57 +0000 (0:00:01.203) 0:08:59.970 ********** 2026-03-29 02:06:59.546575 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-29 02:06:59.546584 | orchestrator | 2026-03-29 02:06:59.546593 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-29 02:06:59.546602 | orchestrator | Sunday 29 March 2026 02:06:58 +0000 (0:00:01.083) 0:09:01.054 ********** 2026-03-29 02:06:59.546612 | orchestrator | ok: [testbed-manager] 2026-03-29 02:06:59.546653 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:06:59.546661 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:06:59.546669 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:06:59.546677 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:06:59.546686 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:06:59.546695 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:06:59.546704 | orchestrator | 2026-03-29 02:06:59.546726 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-29 02:07:01.229831 | orchestrator | Sunday 29 March 2026 02:06:59 +0000 (0:00:00.881) 0:09:01.935 ********** 2026-03-29 02:07:01.229936 | orchestrator | changed: [testbed-manager] 2026-03-29 02:07:01.229951 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:07:01.229963 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:07:01.229974 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:07:01.229985 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:07:01.229996 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:07:01.230007 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:07:01.230078 | orchestrator | 2026-03-29 02:07:01.230114 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:07:01.230127 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-29 02:07:01.230140 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-29 02:07:01.230151 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-29 02:07:01.230162 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-29 02:07:01.230173 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-29 02:07:01.230184 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-29 02:07:01.230195 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-29 02:07:01.230206 | orchestrator | 2026-03-29 02:07:01.230217 | orchestrator | 2026-03-29 02:07:01.230227 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:07:01.230239 | orchestrator | Sunday 29 March 2026 02:07:00 +0000 (0:00:01.140) 0:09:03.075 ********** 2026-03-29 02:07:01.230252 | orchestrator | =============================================================================== 2026-03-29 02:07:01.230272 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.35s 2026-03-29 02:07:01.230292 | orchestrator | osism.commons.packages : Download required packages -------------------- 72.18s 2026-03-29 02:07:01.230313 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.71s 2026-03-29 02:07:01.230332 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.00s 2026-03-29 02:07:01.230352 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.45s 2026-03-29 02:07:01.230371 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.08s 2026-03-29 02:07:01.230390 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 
11.04s 2026-03-29 02:07:01.230409 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.78s 2026-03-29 02:07:01.230428 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.55s 2026-03-29 02:07:01.230448 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.21s 2026-03-29 02:07:01.230468 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.20s 2026-03-29 02:07:01.230487 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.23s 2026-03-29 02:07:01.230505 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.12s 2026-03-29 02:07:01.230542 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.10s 2026-03-29 02:07:01.230561 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.71s 2026-03-29 02:07:01.230579 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.61s 2026-03-29 02:07:01.230600 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.95s 2026-03-29 02:07:01.230651 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.15s 2026-03-29 02:07:01.230671 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.90s 2026-03-29 02:07:01.230691 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.43s 2026-03-29 02:07:01.562209 | orchestrator | + osism apply fail2ban 2026-03-29 02:07:14.092104 | orchestrator | 2026-03-29 02:07:14 | INFO  | Task f3de60ca-f93e-4f2f-b889-35d83330a842 (fail2ban) was prepared for execution. 
2026-03-29 02:07:14.092198 | orchestrator | 2026-03-29 02:07:14 | INFO  | It takes a moment until task f3de60ca-f93e-4f2f-b889-35d83330a842 (fail2ban) has been started and output is visible here. 2026-03-29 02:07:36.734166 | orchestrator | 2026-03-29 02:07:36.734287 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-29 02:07:36.734300 | orchestrator | 2026-03-29 02:07:36.734308 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-29 02:07:36.734315 | orchestrator | Sunday 29 March 2026 02:07:18 +0000 (0:00:00.255) 0:00:00.255 ********** 2026-03-29 02:07:36.734325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:07:36.734333 | orchestrator | 2026-03-29 02:07:36.734341 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-29 02:07:36.734348 | orchestrator | Sunday 29 March 2026 02:07:19 +0000 (0:00:01.097) 0:00:01.352 ********** 2026-03-29 02:07:36.735216 | orchestrator | changed: [testbed-manager] 2026-03-29 02:07:36.735271 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:07:36.735281 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:07:36.735289 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:07:36.735297 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:07:36.735304 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:07:36.735311 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:07:36.735319 | orchestrator | 2026-03-29 02:07:36.735329 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-29 02:07:36.735339 | orchestrator | Sunday 29 March 2026 02:07:31 +0000 (0:00:11.914) 0:00:13.267 ********** 
2026-03-29 02:07:36.735351 | orchestrator | changed: [testbed-manager] 2026-03-29 02:07:36.735364 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:07:36.735372 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:07:36.735379 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:07:36.735385 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:07:36.735392 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:07:36.735398 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:07:36.735404 | orchestrator | 2026-03-29 02:07:36.735411 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-29 02:07:36.735417 | orchestrator | Sunday 29 March 2026 02:07:33 +0000 (0:00:01.552) 0:00:14.819 ********** 2026-03-29 02:07:36.735423 | orchestrator | ok: [testbed-manager] 2026-03-29 02:07:36.735430 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:07:36.735437 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:07:36.735443 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:07:36.735449 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:07:36.735455 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:07:36.735462 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:07:36.735468 | orchestrator | 2026-03-29 02:07:36.735474 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-29 02:07:36.735480 | orchestrator | Sunday 29 March 2026 02:07:34 +0000 (0:00:01.562) 0:00:16.382 ********** 2026-03-29 02:07:36.735487 | orchestrator | changed: [testbed-manager] 2026-03-29 02:07:36.735493 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:07:36.735499 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:07:36.735505 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:07:36.735515 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:07:36.735525 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:07:36.735535 | orchestrator | changed: 
[testbed-node-5] 2026-03-29 02:07:36.735546 | orchestrator | 2026-03-29 02:07:36.735555 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:07:36.735566 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:07:36.735606 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:07:36.735645 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:07:36.735656 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:07:36.735667 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:07:36.735675 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:07:36.735682 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:07:36.735692 | orchestrator | 2026-03-29 02:07:36.735704 | orchestrator | 2026-03-29 02:07:36.735715 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:07:36.735725 | orchestrator | Sunday 29 March 2026 02:07:36 +0000 (0:00:01.622) 0:00:18.005 ********** 2026-03-29 02:07:36.735736 | orchestrator | =============================================================================== 2026-03-29 02:07:36.735747 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.91s 2026-03-29 02:07:36.735758 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s 2026-03-29 02:07:36.735768 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.56s 2026-03-29 02:07:36.735776 | orchestrator | osism.services.fail2ban : 
Copy configuration files ---------------------- 1.55s 2026-03-29 02:07:36.735783 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.10s 2026-03-29 02:07:37.039251 | orchestrator | + osism apply network 2026-03-29 02:07:49.072411 | orchestrator | 2026-03-29 02:07:49 | INFO  | Task 268ab246-43bf-4329-9d91-c2a5427e86ae (network) was prepared for execution. 2026-03-29 02:07:49.072521 | orchestrator | 2026-03-29 02:07:49 | INFO  | It takes a moment until task 268ab246-43bf-4329-9d91-c2a5427e86ae (network) has been started and output is visible here. 2026-03-29 02:08:17.359899 | orchestrator | 2026-03-29 02:08:17.360006 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-29 02:08:17.360020 | orchestrator | 2026-03-29 02:08:17.360030 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-29 02:08:17.360040 | orchestrator | Sunday 29 March 2026 02:07:52 +0000 (0:00:00.194) 0:00:00.194 ********** 2026-03-29 02:08:17.360049 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:17.360059 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:08:17.360068 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:08:17.360077 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:08:17.360086 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:08:17.360094 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:08:17.360103 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:08:17.360112 | orchestrator | 2026-03-29 02:08:17.360121 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-29 02:08:17.360130 | orchestrator | Sunday 29 March 2026 02:07:53 +0000 (0:00:00.538) 0:00:00.732 ********** 2026-03-29 02:08:17.360141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:08:17.360152 | orchestrator | 2026-03-29 02:08:17.360161 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-29 02:08:17.360170 | orchestrator | Sunday 29 March 2026 02:07:54 +0000 (0:00:00.884) 0:00:01.616 ********** 2026-03-29 02:08:17.360200 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:17.360210 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:08:17.360218 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:08:17.360227 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:08:17.360235 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:08:17.360244 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:08:17.360253 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:08:17.360262 | orchestrator | 2026-03-29 02:08:17.360270 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-29 02:08:17.360279 | orchestrator | Sunday 29 March 2026 02:07:56 +0000 (0:00:02.229) 0:00:03.846 ********** 2026-03-29 02:08:17.360288 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:17.360297 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:08:17.360305 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:08:17.360314 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:08:17.360323 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:08:17.360331 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:08:17.360340 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:08:17.360348 | orchestrator | 2026-03-29 02:08:17.360357 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-29 02:08:17.360366 | orchestrator | Sunday 29 March 2026 02:07:58 +0000 (0:00:01.761) 0:00:05.607 ********** 2026-03-29 02:08:17.360375 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-29 02:08:17.360384 | orchestrator | ok: 
[testbed-node-0] => (item=/etc/netplan) 2026-03-29 02:08:17.360393 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-29 02:08:17.360401 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-29 02:08:17.360410 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-29 02:08:17.360419 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-29 02:08:17.360427 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-29 02:08:17.360436 | orchestrator | 2026-03-29 02:08:17.360462 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-29 02:08:17.360472 | orchestrator | Sunday 29 March 2026 02:07:59 +0000 (0:00:01.014) 0:00:06.621 ********** 2026-03-29 02:08:17.360482 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 02:08:17.360493 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 02:08:17.360503 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 02:08:17.360513 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 02:08:17.360524 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 02:08:17.360534 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 02:08:17.360544 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 02:08:17.360554 | orchestrator | 2026-03-29 02:08:17.360564 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-29 02:08:17.360573 | orchestrator | Sunday 29 March 2026 02:08:02 +0000 (0:00:03.170) 0:00:09.792 ********** 2026-03-29 02:08:17.360583 | orchestrator | changed: [testbed-manager] 2026-03-29 02:08:17.360594 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:08:17.360603 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:08:17.360636 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:08:17.360653 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:08:17.360668 | orchestrator | 
changed: [testbed-node-3] 2026-03-29 02:08:17.360679 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:08:17.360689 | orchestrator | 2026-03-29 02:08:17.360699 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-29 02:08:17.360709 | orchestrator | Sunday 29 March 2026 02:08:04 +0000 (0:00:01.630) 0:00:11.423 ********** 2026-03-29 02:08:17.360719 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 02:08:17.360727 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 02:08:17.360736 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 02:08:17.360745 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 02:08:17.360753 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 02:08:17.360762 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 02:08:17.360779 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 02:08:17.360788 | orchestrator | 2026-03-29 02:08:17.360797 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-29 02:08:17.360805 | orchestrator | Sunday 29 March 2026 02:08:05 +0000 (0:00:01.599) 0:00:13.022 ********** 2026-03-29 02:08:17.360814 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:17.360823 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:08:17.360832 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:08:17.360840 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:08:17.360849 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:08:17.360858 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:08:17.360866 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:08:17.360875 | orchestrator | 2026-03-29 02:08:17.360884 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-29 02:08:17.360909 | orchestrator | Sunday 29 March 2026 02:08:06 +0000 (0:00:01.147) 0:00:14.170 ********** 2026-03-29 02:08:17.360918 | orchestrator 
| skipping: [testbed-manager] 2026-03-29 02:08:17.360927 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:08:17.360935 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:08:17.360944 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:08:17.360952 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:08:17.360961 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:08:17.360969 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:08:17.360978 | orchestrator | 2026-03-29 02:08:17.360987 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-29 02:08:17.360995 | orchestrator | Sunday 29 March 2026 02:08:07 +0000 (0:00:00.634) 0:00:14.804 ********** 2026-03-29 02:08:17.361004 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:17.361013 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:08:17.361021 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:08:17.361030 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:08:17.361039 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:08:17.361047 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:08:17.361056 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:08:17.361065 | orchestrator | 2026-03-29 02:08:17.361073 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-29 02:08:17.361082 | orchestrator | Sunday 29 March 2026 02:08:10 +0000 (0:00:02.572) 0:00:17.377 ********** 2026-03-29 02:08:17.361091 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:08:17.361100 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:08:17.361108 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:08:17.361117 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:08:17.361125 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:08:17.361134 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:08:17.361143 | orchestrator | changed: [testbed-manager] => (item={'dest': 
'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-29 02:08:17.361153 | orchestrator | 2026-03-29 02:08:17.361162 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-29 02:08:17.361171 | orchestrator | Sunday 29 March 2026 02:08:10 +0000 (0:00:00.947) 0:00:18.325 ********** 2026-03-29 02:08:17.361179 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:17.361188 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:08:17.361196 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:08:17.361205 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:08:17.361214 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:08:17.361222 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:08:17.361231 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:08:17.361239 | orchestrator | 2026-03-29 02:08:17.361248 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-29 02:08:17.361257 | orchestrator | Sunday 29 March 2026 02:08:12 +0000 (0:00:01.845) 0:00:20.171 ********** 2026-03-29 02:08:17.361266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:08:17.361283 | orchestrator | 2026-03-29 02:08:17.361291 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-29 02:08:17.361300 | orchestrator | Sunday 29 March 2026 02:08:14 +0000 (0:00:01.278) 0:00:21.449 ********** 2026-03-29 02:08:17.361308 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:17.361317 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:08:17.361326 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:08:17.361334 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:08:17.361343 | orchestrator | 
ok: [testbed-node-3] 2026-03-29 02:08:17.361351 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:08:17.361360 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:08:17.361368 | orchestrator | 2026-03-29 02:08:17.361377 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-29 02:08:17.361386 | orchestrator | Sunday 29 March 2026 02:08:15 +0000 (0:00:01.103) 0:00:22.553 ********** 2026-03-29 02:08:17.361394 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:17.361403 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:08:17.361411 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:08:17.361420 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:08:17.361428 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:08:17.361437 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:08:17.361445 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:08:17.361454 | orchestrator | 2026-03-29 02:08:17.361463 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-29 02:08:17.361471 | orchestrator | Sunday 29 March 2026 02:08:15 +0000 (0:00:00.816) 0:00:23.369 ********** 2026-03-29 02:08:17.361480 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-29 02:08:17.361494 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-29 02:08:17.361503 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-29 02:08:17.361511 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-29 02:08:17.361520 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-29 02:08:17.361529 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-29 02:08:17.361537 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-29 02:08:17.361546 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-29 02:08:17.361554 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-29 02:08:17.361563 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-29 02:08:17.361571 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-29 02:08:17.361580 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-29 02:08:17.361589 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-29 02:08:17.361597 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-29 02:08:17.361606 | orchestrator | 2026-03-29 02:08:17.361657 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-29 02:08:32.889442 | orchestrator | Sunday 29 March 2026 02:08:17 +0000 (0:00:01.343) 0:00:24.713 ********** 2026-03-29 02:08:32.889553 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:08:32.889570 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:08:32.889582 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:08:32.889593 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:08:32.889604 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:08:32.889669 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:08:32.889686 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:08:32.889697 | orchestrator | 2026-03-29 02:08:32.889708 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-29 02:08:32.889743 | orchestrator | Sunday 29 March 2026 02:08:18 +0000 (0:00:00.719) 0:00:25.432 ********** 2026-03-29 02:08:32.889758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, 
testbed-node-1, testbed-node-0, testbed-node-5, testbed-node-4, testbed-node-2, testbed-node-3 2026-03-29 02:08:32.889780 | orchestrator | 2026-03-29 02:08:32.889797 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-29 02:08:32.889814 | orchestrator | Sunday 29 March 2026 02:08:22 +0000 (0:00:04.473) 0:00:29.906 ********** 2026-03-29 02:08:32.889835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.889853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.889871 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.889891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.889908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.889925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.889943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.889977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.889997 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-29 
02:08:32.890199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890213 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890226 | orchestrator | 2026-03-29 02:08:32.890240 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-29 02:08:32.890252 | orchestrator | Sunday 29 March 2026 02:08:27 +0000 (0:00:04.952) 0:00:34.859 ********** 2026-03-29 02:08:32.890266 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.890279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.890294 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.890319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.890332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.890346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.890357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-29 02:08:32.890385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:32.890437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:38.879014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-29 02:08:38.879122 | orchestrator | 2026-03-29 02:08:38.879139 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-29 02:08:38.879151 | orchestrator | Sunday 29 March 2026 02:08:32 +0000 (0:00:05.383) 0:00:40.243 ********** 2026-03-29 02:08:38.879163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:08:38.879174 | orchestrator | 2026-03-29 02:08:38.879184 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-29 02:08:38.879195 | orchestrator | Sunday 29 March 2026 02:08:34 +0000 (0:00:01.172) 0:00:41.416 ********** 2026-03-29 
02:08:38.879205 | orchestrator | ok: [testbed-manager] 2026-03-29 02:08:38.879216 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:08:38.879226 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:08:38.879235 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:08:38.879245 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:08:38.879255 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:08:38.879264 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:08:38.879274 | orchestrator | 2026-03-29 02:08:38.879284 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-29 02:08:38.879294 | orchestrator | Sunday 29 March 2026 02:08:35 +0000 (0:00:01.105) 0:00:42.522 ********** 2026-03-29 02:08:38.879304 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-29 02:08:38.879315 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-29 02:08:38.879325 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-29 02:08:38.879334 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-29 02:08:38.879344 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:08:38.879355 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-29 02:08:38.879365 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-29 02:08:38.879375 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-29 02:08:38.879385 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-29 02:08:38.879395 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:08:38.879405 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-29 02:08:38.879414 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-29 02:08:38.879424 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-29 02:08:38.879434 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-29 02:08:38.879444 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:08:38.879475 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-29 02:08:38.879486 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-29 02:08:38.879496 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-29 02:08:38.879505 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-29 02:08:38.879515 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:08:38.879538 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-29 02:08:38.879549 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-29 02:08:38.879561 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-29 02:08:38.879572 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-29 02:08:38.879584 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:08:38.879595 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-29 02:08:38.879606 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-29 02:08:38.879677 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-29 02:08:38.879689 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  
2026-03-29 02:08:38.879699 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:08:38.879708 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 02:08:38.879718 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 02:08:38.879728 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 02:08:38.879737 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 02:08:38.879747 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:08:38.879757 | orchestrator |
2026-03-29 02:08:38.879766 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-29 02:08:38.879793 | orchestrator | Sunday 29 March 2026 02:08:37 +0000 (0:00:01.881) 0:00:44.403 **********
2026-03-29 02:08:38.879804 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:08:38.879814 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:08:38.879823 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:08:38.879833 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:08:38.879842 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:08:38.879852 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:08:38.879861 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:08:38.879871 | orchestrator |
2026-03-29 02:08:38.879880 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-29 02:08:38.879890 | orchestrator | Sunday 29 March 2026 02:08:37 +0000 (0:00:00.640) 0:00:45.044 **********
2026-03-29 02:08:38.879899 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:08:38.879909 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:08:38.879918 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:08:38.879928 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:08:38.879937 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:08:38.879947 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:08:38.879957 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:08:38.879966 | orchestrator |
2026-03-29 02:08:38.879976 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:08:38.879987 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 02:08:38.879998 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 02:08:38.880017 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 02:08:38.880027 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 02:08:38.880036 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 02:08:38.880046 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 02:08:38.880056 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 02:08:38.880065 | orchestrator |
2026-03-29 02:08:38.880075 | orchestrator |
2026-03-29 02:08:38.880085 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:08:38.880095 | orchestrator | Sunday 29 March 2026 02:08:38 +0000 (0:00:00.756) 0:00:45.800 **********
2026-03-29 02:08:38.880104 | orchestrator | ===============================================================================
2026-03-29 02:08:38.880114 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.38s
2026-03-29 02:08:38.880123 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.95s
2026-03-29 02:08:38.880133 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.47s
2026-03-29 02:08:38.880142 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.17s
2026-03-29 02:08:38.880152 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.57s
2026-03-29 02:08:38.880161 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.23s
2026-03-29 02:08:38.880171 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.88s
2026-03-29 02:08:38.880180 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.85s
2026-03-29 02:08:38.880195 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.76s
2026-03-29 02:08:38.880205 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2026-03-29 02:08:38.880214 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.60s
2026-03-29 02:08:38.880224 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.34s
2026-03-29 02:08:38.880233 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s
2026-03-29 02:08:38.880243 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.17s
2026-03-29 02:08:38.880252 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2026-03-29 02:08:38.880262 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s
2026-03-29 02:08:38.880271 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s
2026-03-29 02:08:38.880281 | orchestrator | osism.commons.network : Create required directories --------------------- 1.01s
2026-03-29 02:08:38.880290 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s
2026-03-29 02:08:38.880300 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.88s
2026-03-29 02:08:39.197307 | orchestrator | + osism apply wireguard
2026-03-29 02:08:51.286125 | orchestrator | 2026-03-29 02:08:51 | INFO  | Task 178d06f1-b454-4261-973b-e4ac9c9588c1 (wireguard) was prepared for execution.
2026-03-29 02:08:51.286256 | orchestrator | 2026-03-29 02:08:51 | INFO  | It takes a moment until task 178d06f1-b454-4261-973b-e4ac9c9588c1 (wireguard) has been started and output is visible here.
2026-03-29 02:09:09.805770 | orchestrator |
2026-03-29 02:09:09.805917 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-29 02:09:09.805984 | orchestrator |
2026-03-29 02:09:09.806007 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-29 02:09:09.806120 | orchestrator | Sunday 29 March 2026 02:08:55 +0000 (0:00:00.167) 0:00:00.167 **********
2026-03-29 02:09:09.806142 | orchestrator | ok: [testbed-manager]
2026-03-29 02:09:09.806161 | orchestrator |
2026-03-29 02:09:09.806178 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-29 02:09:09.806197 | orchestrator | Sunday 29 March 2026 02:08:56 +0000 (0:00:01.148) 0:00:01.315 **********
2026-03-29 02:09:09.806217 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:09.806236 | orchestrator |
2026-03-29 02:09:09.806254 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-29 02:09:09.806277 | orchestrator | Sunday 29 March 2026 02:09:02 +0000 (0:00:05.829) 0:00:07.144 **********
2026-03-29 02:09:09.806298 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:09.806317 | orchestrator |
2026-03-29 02:09:09.806337 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-29 02:09:09.806357 | orchestrator | Sunday 29 March 2026 02:09:02 +0000 (0:00:00.545) 0:00:07.690 **********
2026-03-29 02:09:09.806377 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:09.806398 | orchestrator |
2026-03-29 02:09:09.806419 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-29 02:09:09.806439 | orchestrator | Sunday 29 March 2026 02:09:03 +0000 (0:00:00.425) 0:00:08.115 **********
2026-03-29 02:09:09.806462 | orchestrator | ok: [testbed-manager]
2026-03-29 02:09:09.806481 | orchestrator |
2026-03-29 02:09:09.806501 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-29 02:09:09.806521 | orchestrator | Sunday 29 March 2026 02:09:03 +0000 (0:00:00.696) 0:00:08.812 **********
2026-03-29 02:09:09.806541 | orchestrator | ok: [testbed-manager]
2026-03-29 02:09:09.806560 | orchestrator |
2026-03-29 02:09:09.806581 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-29 02:09:09.806600 | orchestrator | Sunday 29 March 2026 02:09:04 +0000 (0:00:00.458) 0:00:09.271 **********
2026-03-29 02:09:09.806650 | orchestrator | ok: [testbed-manager]
2026-03-29 02:09:09.806671 | orchestrator |
2026-03-29 02:09:09.806689 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-29 02:09:09.806707 | orchestrator | Sunday 29 March 2026 02:09:04 +0000 (0:00:00.415) 0:00:09.687 **********
2026-03-29 02:09:09.806725 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:09.806744 | orchestrator |
2026-03-29 02:09:09.806761 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-29 02:09:09.806778 | orchestrator | Sunday 29 March 2026 02:09:05 +0000 (0:00:01.174) 0:00:10.862 **********
2026-03-29 02:09:09.806795 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 02:09:09.806814 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:09.806831 | orchestrator |
2026-03-29 02:09:09.806849 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-29 02:09:09.806866 | orchestrator | Sunday 29 March 2026 02:09:06 +0000 (0:00:00.924) 0:00:11.786 **********
2026-03-29 02:09:09.806882 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:09.806900 | orchestrator |
2026-03-29 02:09:09.806919 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-29 02:09:09.806937 | orchestrator | Sunday 29 March 2026 02:09:08 +0000 (0:00:01.702) 0:00:13.488 **********
2026-03-29 02:09:09.806956 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:09.806974 | orchestrator |
2026-03-29 02:09:09.806993 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:09:09.807012 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 02:09:09.807032 | orchestrator |
2026-03-29 02:09:09.807050 | orchestrator |
2026-03-29 02:09:09.807069 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:09:09.807087 | orchestrator | Sunday 29 March 2026 02:09:09 +0000 (0:00:00.914) 0:00:14.403 **********
2026-03-29 02:09:09.807125 | orchestrator | ===============================================================================
2026-03-29 02:09:09.807144 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.83s
2026-03-29 02:09:09.807163 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.70s
2026-03-29 02:09:09.807182 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s
2026-03-29 02:09:09.807200 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.15s
2026-03-29 02:09:09.807220 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s
2026-03-29 02:09:09.807238 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s
2026-03-29 02:09:09.807256 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-03-29 02:09:09.807275 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2026-03-29 02:09:09.807294 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s
2026-03-29 02:09:09.807312 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2026-03-29 02:09:09.807331 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2026-03-29 02:09:10.120196 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-29 02:09:10.147958 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-29 02:09:10.148041 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-29 02:09:10.219999 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 195 0 --:--:-- --:--:-- --:--:-- 197
2026-03-29 02:09:10.235999 | orchestrator | + osism apply --environment custom workarounds
2026-03-29 02:09:12.256033 | orchestrator | 2026-03-29 02:09:12 | INFO  | Trying to run play workarounds in environment custom
2026-03-29 02:09:22.352053 | orchestrator | 2026-03-29 02:09:22 | INFO  | Task 31ba52a4-2c45-4a91-a1c4-5c28229c7ec2 (workarounds) was prepared for execution.
2026-03-29 02:09:22.352208 | orchestrator | 2026-03-29 02:09:22 | INFO  | It takes a moment until task 31ba52a4-2c45-4a91-a1c4-5c28229c7ec2 (workarounds) has been started and output is visible here.
2026-03-29 02:09:49.513369 | orchestrator |
2026-03-29 02:09:49.513523 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 02:09:49.513551 | orchestrator |
2026-03-29 02:09:49.513572 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-29 02:09:49.513589 | orchestrator | Sunday 29 March 2026 02:09:26 +0000 (0:00:00.128) 0:00:00.128 **********
2026-03-29 02:09:49.514693 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-29 02:09:49.514760 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-29 02:09:49.514780 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-29 02:09:49.514799 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-29 02:09:49.514839 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-29 02:09:49.514873 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-29 02:09:49.514885 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-29 02:09:49.514896 | orchestrator |
2026-03-29 02:09:49.514908 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-29 02:09:49.514919 | orchestrator |
2026-03-29 02:09:49.514930 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-29 02:09:49.514942 | orchestrator | Sunday 29 March 2026 02:09:27 +0000 (0:00:00.807) 0:00:00.936 **********
2026-03-29 02:09:49.514953 | orchestrator | ok: [testbed-manager]
2026-03-29 02:09:49.514964 | orchestrator |
2026-03-29 02:09:49.514975 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-29 02:09:49.515015 | orchestrator |
2026-03-29 02:09:49.515026 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-29 02:09:49.515037 | orchestrator | Sunday 29 March 2026 02:09:30 +0000 (0:00:02.529) 0:00:03.466 **********
2026-03-29 02:09:49.515050 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:09:49.515068 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:09:49.515092 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:09:49.515117 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:09:49.515135 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:09:49.515153 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:09:49.515170 | orchestrator |
2026-03-29 02:09:49.515187 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-29 02:09:49.515205 | orchestrator |
2026-03-29 02:09:49.515221 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-29 02:09:49.515240 | orchestrator | Sunday 29 March 2026 02:09:32 +0000 (0:00:01.897) 0:00:05.364 **********
2026-03-29 02:09:49.515260 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 02:09:49.515278 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 02:09:49.515295 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 02:09:49.515311 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 02:09:49.515329 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 02:09:49.515364 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 02:09:49.515384 | orchestrator |
2026-03-29 02:09:49.515402 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-29 02:09:49.515421 | orchestrator | Sunday 29 March 2026 02:09:33 +0000 (0:00:01.533) 0:00:06.897 **********
2026-03-29 02:09:49.515439 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:09:49.515458 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:09:49.515476 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:09:49.515492 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:09:49.515511 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:09:49.515531 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:09:49.515550 | orchestrator |
2026-03-29 02:09:49.515568 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-29 02:09:49.515586 | orchestrator | Sunday 29 March 2026 02:09:37 +0000 (0:00:03.713) 0:00:10.610 **********
2026-03-29 02:09:49.515597 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:09:49.515608 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:09:49.515619 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:09:49.515660 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:09:49.515672 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:09:49.515683 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:09:49.515693 | orchestrator |
2026-03-29 02:09:49.515704 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-29 02:09:49.515715 | orchestrator |
2026-03-29 02:09:49.515726 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-29 02:09:49.515737 | orchestrator | Sunday 29 March 2026 02:09:37 +0000 (0:00:00.714) 0:00:11.324 **********
2026-03-29 02:09:49.515747 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:09:49.515758 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:09:49.515769 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:09:49.515780 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:09:49.515791 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:09:49.515801 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:49.515812 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:09:49.515823 | orchestrator |
2026-03-29 02:09:49.515864 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-29 02:09:49.515888 | orchestrator | Sunday 29 March 2026 02:09:39 +0000 (0:00:01.657) 0:00:12.982 **********
2026-03-29 02:09:49.515906 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:09:49.515923 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:09:49.515940 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:09:49.515957 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:09:49.515975 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:09:49.515993 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:09:49.516046 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:49.516068 | orchestrator |
2026-03-29 02:09:49.516081 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-29 02:09:49.516092 | orchestrator | Sunday 29 March 2026 02:09:41 +0000 (0:00:01.518) 0:00:14.562 **********
2026-03-29 02:09:49.516102 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:09:49.516114 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:09:49.516125 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:09:49.516135 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:09:49.516146 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:09:49.516157 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:09:49.516167 | orchestrator | ok: [testbed-manager]
2026-03-29 02:09:49.516178 | orchestrator |
2026-03-29 02:09:49.516189 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-29 02:09:49.516200 | orchestrator | Sunday 29 March 2026 02:09:42 +0000 (0:00:01.518) 0:00:16.080 **********
2026-03-29 02:09:49.516211 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:09:49.516221 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:09:49.516232 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:09:49.516243 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:09:49.516253 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:09:49.516264 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:09:49.516275 | orchestrator | changed: [testbed-manager]
2026-03-29 02:09:49.516288 | orchestrator |
2026-03-29 02:09:49.516312 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-29 02:09:49.516338 | orchestrator | Sunday 29 March 2026 02:09:44 +0000 (0:00:01.777) 0:00:17.858 **********
2026-03-29 02:09:49.516356 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:09:49.516373 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:09:49.516391 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:09:49.516407 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:09:49.516426 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:09:49.516445 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:09:49.516463 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:09:49.516480 | orchestrator |
2026-03-29 02:09:49.516499 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-29 02:09:49.516518 | orchestrator |
2026-03-29 02:09:49.516537 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-29 02:09:49.516556 | orchestrator | Sunday 29 March 2026 02:09:45 +0000 (0:00:00.618) 0:00:18.476 **********
2026-03-29 02:09:49.516574 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:09:49.516592 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:09:49.516610 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:09:49.516699 | orchestrator | ok: [testbed-manager]
2026-03-29 02:09:49.516719 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:09:49.516742 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:09:49.516769 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:09:49.516787 | orchestrator |
2026-03-29 02:09:49.516804 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:09:49.516824 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-29 02:09:49.516843 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:09:49.516879 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:09:49.516907 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:09:49.516919 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:09:49.516930 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:09:49.516942 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:09:49.516961 | orchestrator |
2026-03-29 02:09:49.516991 | orchestrator |
2026-03-29 02:09:49.517010 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:09:49.517028 | orchestrator | Sunday 29 March 2026 02:09:49 +0000 (0:00:04.338) 0:00:22.815 **********
2026-03-29 02:09:49.517044 | orchestrator | ===============================================================================
2026-03-29 02:09:49.517060 | orchestrator | Install python3-docker -------------------------------------------------- 4.34s
2026-03-29 02:09:49.517078 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s
2026-03-29 02:09:49.517095 | orchestrator | Apply netplan configuration --------------------------------------------- 2.53s
2026-03-29 02:09:49.517112 | orchestrator | Apply netplan configuration --------------------------------------------- 1.90s
2026-03-29 02:09:49.517131 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.78s
2026-03-29 02:09:49.517149 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.66s
2026-03-29 02:09:49.517166 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s
2026-03-29 02:09:49.517184 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s
2026-03-29 02:09:49.517202 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s
2026-03-29 02:09:49.517222 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.81s
2026-03-29 02:09:49.517240 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s
2026-03-29 02:09:49.517277 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s
2026-03-29 02:09:50.197709 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-29 02:10:02.350991 | orchestrator | 2026-03-29 02:10:02 | INFO  | Task 20da6668-8242-4ef5-8771-631f6789e2f8 (reboot) was prepared for execution.
2026-03-29 02:10:02.351135 | orchestrator | 2026-03-29 02:10:02 | INFO  | It takes a moment until task 20da6668-8242-4ef5-8771-631f6789e2f8 (reboot) has been started and output is visible here.
2026-03-29 02:10:12.733435 | orchestrator |
2026-03-29 02:10:12.733517 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 02:10:12.733524 | orchestrator |
2026-03-29 02:10:12.733529 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 02:10:12.733533 | orchestrator | Sunday 29 March 2026 02:10:06 +0000 (0:00:00.238) 0:00:00.238 **********
2026-03-29 02:10:12.733537 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:10:12.733543 | orchestrator |
2026-03-29 02:10:12.733547 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 02:10:12.733551 | orchestrator | Sunday 29 March 2026 02:10:06 +0000 (0:00:00.097) 0:00:00.336 **********
2026-03-29 02:10:12.733555 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:10:12.733559 | orchestrator |
2026-03-29 02:10:12.733563 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 02:10:12.733566 | orchestrator | Sunday 29 March 2026 02:10:07 +0000 (0:00:00.873) 0:00:01.210 **********
2026-03-29 02:10:12.733586 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:10:12.733590 | orchestrator |
2026-03-29 02:10:12.733594 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 02:10:12.733597 | orchestrator |
2026-03-29 02:10:12.733601 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 02:10:12.733605 | orchestrator | Sunday 29 March 2026 02:10:07 +0000 (0:00:00.115) 0:00:01.326 **********
2026-03-29 02:10:12.733608 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:10:12.733612 | orchestrator |
2026-03-29 02:10:12.733616 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 02:10:12.733619 | orchestrator | Sunday 29 March 2026 02:10:07 +0000 (0:00:00.089) 0:00:01.416 **********
2026-03-29 02:10:12.733623 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:10:12.733672 | orchestrator |
2026-03-29 02:10:12.733676 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 02:10:12.733680 | orchestrator | Sunday 29 March 2026 02:10:08 +0000 (0:00:00.617) 0:00:02.033 **********
2026-03-29 02:10:12.733684 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:10:12.733688 | orchestrator |
2026-03-29 02:10:12.733691 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 02:10:12.733695 | orchestrator |
2026-03-29 02:10:12.733699 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 02:10:12.733702 | orchestrator | Sunday 29 March 2026 02:10:08 +0000 (0:00:00.230) 0:00:02.151 **********
2026-03-29 02:10:12.733706 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:10:12.733710 | orchestrator |
2026-03-29 02:10:12.733714 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 02:10:12.733717 | orchestrator | Sunday 29 March 2026 02:10:08 +0000 (0:00:00.230) 0:00:02.381 **********
2026-03-29 02:10:12.733721 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:10:12.733725 | orchestrator |
2026-03-29 02:10:12.733729 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 02:10:12.733743 | orchestrator | Sunday 29 March 2026 02:10:09 +0000 (0:00:00.672) 0:00:03.053 **********
2026-03-29 02:10:12.733747 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:10:12.733751 | orchestrator |
2026-03-29 02:10:12.733754 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 02:10:12.733758 | orchestrator |
2026-03-29 02:10:12.733762 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 02:10:12.733765 | orchestrator | Sunday 29 March 2026 02:10:09 +0000 (0:00:00.128) 0:00:03.181 **********
2026-03-29 02:10:12.733769 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:10:12.733773 | orchestrator |
2026-03-29 02:10:12.733777 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 02:10:12.733780 | orchestrator | Sunday 29 March 2026 02:10:09 +0000 (0:00:00.101) 0:00:03.284 **********
2026-03-29 02:10:12.733784 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:10:12.733788 | orchestrator |
2026-03-29 02:10:12.733791 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 02:10:12.733795 | orchestrator | Sunday 29 March 2026 02:10:10 +0000 (0:00:00.689) 0:00:03.973 **********
2026-03-29 02:10:12.733799 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:10:12.733802 | orchestrator |
2026-03-29 02:10:12.733806 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 02:10:12.733810 | orchestrator |
2026-03-29 02:10:12.733814 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 02:10:12.733817 | orchestrator | Sunday 29 March 2026 02:10:10 +0000 (0:00:00.143) 0:00:04.117 **********
2026-03-29 02:10:12.733821 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:10:12.733825 | orchestrator |
2026-03-29 02:10:12.733829 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 02:10:12.733832 | orchestrator | Sunday 29 March 2026 02:10:10 +0000 (0:00:00.102) 0:00:04.220 **********
2026-03-29 02:10:12.733841 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:10:12.733844 | orchestrator |
2026-03-29 02:10:12.733848 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 02:10:12.733852 | orchestrator | Sunday 29 March 2026 02:10:11 +0000 (0:00:00.685) 0:00:04.905 **********
2026-03-29 02:10:12.733855 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:10:12.733859 | orchestrator |
2026-03-29 02:10:12.733863 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 02:10:12.733867 | orchestrator |
2026-03-29 02:10:12.733871 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 02:10:12.733875 | orchestrator | Sunday 29 March 2026 02:10:11 +0000 (0:00:00.116) 0:00:05.022 **********
2026-03-29 02:10:12.733878 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:10:12.733882 | orchestrator |
2026-03-29 02:10:12.733886 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 02:10:12.733889 | orchestrator | Sunday 29 March 2026 02:10:11 +0000 (0:00:00.115) 0:00:05.138 **********
2026-03-29 02:10:12.733893 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:10:12.733897 | orchestrator |
2026-03-29 02:10:12.733901 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 02:10:12.733904 | orchestrator | Sunday 29 March 2026 02:10:12 +0000 (0:00:00.734) 0:00:05.872 **********
2026-03-29 02:10:12.733919 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:10:12.733923 | orchestrator |
2026-03-29 02:10:12.733927 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:10:12.733931 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:10:12.733936 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:10:12.733940 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:10:12.733943 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:10:12.733947 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:10:12.733951 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:10:12.733954 | orchestrator |
2026-03-29 02:10:12.733958 | orchestrator |
2026-03-29 02:10:12.733962 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:10:12.733966 | orchestrator | Sunday 29 March 2026 02:10:12 +0000 (0:00:00.041) 0:00:05.913 **********
2026-03-29 02:10:12.733970 | orchestrator | ===============================================================================
2026-03-29 02:10:12.733973 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s
2026-03-29 02:10:12.733977 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s
2026-03-29 02:10:12.733981 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s
2026-03-29 02:10:13.107191 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-03-29 02:10:25.694702 | orchestrator | 2026-03-29 02:10:25 | INFO  | Task a5def436-a5ed-42d2-9b29-67fd60263201 (wait-for-connection) was prepared for execution.
2026-03-29 02:10:25.694797 | orchestrator | 2026-03-29 02:10:25 | INFO  | It takes a moment until task a5def436-a5ed-42d2-9b29-67fd60263201 (wait-for-connection) has been started and output is visible here.
2026-03-29 02:10:41.648324 | orchestrator | 2026-03-29 02:10:41.648477 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-29 02:10:41.648539 | orchestrator | 2026-03-29 02:10:41.648564 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-29 02:10:41.648583 | orchestrator | Sunday 29 March 2026 02:10:29 +0000 (0:00:00.239) 0:00:00.239 ********** 2026-03-29 02:10:41.648603 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:10:41.648623 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:10:41.648697 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:10:41.648709 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:10:41.648720 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:10:41.648731 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:10:41.648742 | orchestrator | 2026-03-29 02:10:41.648753 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:10:41.648764 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:10:41.648777 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:10:41.648788 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:10:41.648799 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:10:41.648810 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:10:41.648821 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:10:41.648832 | orchestrator | 2026-03-29 02:10:41.648843 | orchestrator | 2026-03-29 02:10:41.648855 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-29 02:10:41.648868 | orchestrator | Sunday 29 March 2026 02:10:41 +0000 (0:00:11.557) 0:00:11.797 ********** 2026-03-29 02:10:41.648881 | orchestrator | =============================================================================== 2026-03-29 02:10:41.648894 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s 2026-03-29 02:10:41.837826 | orchestrator | + osism apply hddtemp 2026-03-29 02:10:53.641535 | orchestrator | 2026-03-29 02:10:53 | INFO  | Task e7896ef4-54bf-4879-b25c-0de22a2a67c5 (hddtemp) was prepared for execution. 2026-03-29 02:10:53.641625 | orchestrator | 2026-03-29 02:10:53 | INFO  | It takes a moment until task e7896ef4-54bf-4879-b25c-0de22a2a67c5 (hddtemp) has been started and output is visible here. 2026-03-29 02:11:22.656779 | orchestrator | 2026-03-29 02:11:22.656852 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-29 02:11:22.656860 | orchestrator | 2026-03-29 02:11:22.656866 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-29 02:11:22.656872 | orchestrator | Sunday 29 March 2026 02:10:57 +0000 (0:00:00.275) 0:00:00.275 ********** 2026-03-29 02:11:22.656877 | orchestrator | ok: [testbed-manager] 2026-03-29 02:11:22.656883 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:11:22.656889 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:11:22.656894 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:11:22.656898 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:11:22.656903 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:11:22.656908 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:11:22.656913 | orchestrator | 2026-03-29 02:11:22.656918 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-29 02:11:22.656923 | orchestrator | Sunday 29 March 2026 
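The pattern in the plays above — trigger the reboot without waiting for it, then run a separate `wait-for-connection` pass until every node answers — boils down to a generic retry loop. A minimal shell sketch of that pattern (the function name, timeout handling, and the commented SSH probe are illustrative assumptions, not taken from the OSISM scripts):

```shell
#!/usr/bin/env bash
# Generic "wait until reachable" helper: retry a probe command until it
# succeeds or a timeout expires. Sketch of the pattern the
# wait-for-connection play implements; not the play's actual code.
retry_until() {
    local timeout=$1; shift
    local deadline=$((SECONDS + timeout))
    until "$@"; do
        # Give up once the deadline has passed.
        (( SECONDS >= deadline )) && return 1
        sleep 1
    done
}

# Example probe: an SSH no-op against a rebooted node (host illustrative):
# retry_until 600 ssh -o ConnectTimeout=5 -o BatchMode=yes testbed-node-0 true
```

Splitting "reboot" and "wait" into separate plays, as the log shows, lets all nodes reboot in parallel before any waiting starts.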
02:10:58 +0000 (0:00:00.729) 0:00:01.004 ********** 2026-03-29 02:11:22.656929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:11:22.656949 | orchestrator | 2026-03-29 02:11:22.656955 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-29 02:11:22.656959 | orchestrator | Sunday 29 March 2026 02:10:59 +0000 (0:00:01.259) 0:00:02.264 ********** 2026-03-29 02:11:22.656964 | orchestrator | ok: [testbed-manager] 2026-03-29 02:11:22.656969 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:11:22.656974 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:11:22.656979 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:11:22.656983 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:11:22.656988 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:11:22.656993 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:11:22.656998 | orchestrator | 2026-03-29 02:11:22.657003 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-29 02:11:22.657008 | orchestrator | Sunday 29 March 2026 02:11:02 +0000 (0:00:02.152) 0:00:04.416 ********** 2026-03-29 02:11:22.657013 | orchestrator | changed: [testbed-manager] 2026-03-29 02:11:22.657019 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:11:22.657024 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:11:22.657029 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:11:22.657033 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:11:22.657038 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:11:22.657043 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:11:22.657047 | orchestrator | 2026-03-29 02:11:22.657052 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module 
is available] ********* 2026-03-29 02:11:22.657057 | orchestrator | Sunday 29 March 2026 02:11:03 +0000 (0:00:01.256) 0:00:05.673 ********** 2026-03-29 02:11:22.657062 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:11:22.657067 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:11:22.657071 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:11:22.657076 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:11:22.657081 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:11:22.657086 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:11:22.657091 | orchestrator | ok: [testbed-manager] 2026-03-29 02:11:22.657095 | orchestrator | 2026-03-29 02:11:22.657105 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-29 02:11:22.657110 | orchestrator | Sunday 29 March 2026 02:11:04 +0000 (0:00:01.341) 0:00:07.014 ********** 2026-03-29 02:11:22.657115 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:11:22.657120 | orchestrator | changed: [testbed-manager] 2026-03-29 02:11:22.657125 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:11:22.657129 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:11:22.657134 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:11:22.657139 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:11:22.657144 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:11:22.657148 | orchestrator | 2026-03-29 02:11:22.657153 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-29 02:11:22.657158 | orchestrator | Sunday 29 March 2026 02:11:05 +0000 (0:00:00.959) 0:00:07.974 ********** 2026-03-29 02:11:22.657163 | orchestrator | changed: [testbed-manager] 2026-03-29 02:11:22.657168 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:11:22.657172 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:11:22.657177 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:11:22.657182 | orchestrator | changed: 
[testbed-node-5] 2026-03-29 02:11:22.657186 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:11:22.657191 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:11:22.657196 | orchestrator | 2026-03-29 02:11:22.657201 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-29 02:11:22.657205 | orchestrator | Sunday 29 March 2026 02:11:19 +0000 (0:00:13.445) 0:00:21.420 ********** 2026-03-29 02:11:22.657210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:11:22.657215 | orchestrator | 2026-03-29 02:11:22.657220 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-29 02:11:22.657229 | orchestrator | Sunday 29 March 2026 02:11:20 +0000 (0:00:01.280) 0:00:22.700 ********** 2026-03-29 02:11:22.657234 | orchestrator | changed: [testbed-manager] 2026-03-29 02:11:22.657238 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:11:22.657243 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:11:22.657248 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:11:22.657253 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:11:22.657258 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:11:22.657263 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:11:22.657267 | orchestrator | 2026-03-29 02:11:22.657272 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:11:22.657277 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:11:22.657293 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:11:22.657299 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:11:22.657304 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:11:22.657309 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:11:22.657313 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:11:22.657318 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:11:22.657323 | orchestrator | 2026-03-29 02:11:22.657328 | orchestrator | 2026-03-29 02:11:22.657333 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:11:22.657337 | orchestrator | Sunday 29 March 2026 02:11:22 +0000 (0:00:01.979) 0:00:24.679 ********** 2026-03-29 02:11:22.657342 | orchestrator | =============================================================================== 2026-03-29 02:11:22.657347 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.45s 2026-03-29 02:11:22.657352 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.15s 2026-03-29 02:11:22.657356 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.98s 2026-03-29 02:11:22.657362 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.34s 2026-03-29 02:11:22.657367 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s 2026-03-29 02:11:22.657373 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.26s 2026-03-29 02:11:22.657378 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.26s 2026-03-29 02:11:22.657384 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.96s 2026-03-29 02:11:22.657389 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2026-03-29 02:11:23.021205 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-29 02:11:23.084873 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-29 02:11:23.084969 | orchestrator | + sudo systemctl restart manager.service 2026-03-29 02:11:36.633511 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 02:11:36.633678 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-29 02:11:36.633706 | orchestrator | + local max_attempts=60 2026-03-29 02:11:36.633730 | orchestrator | + local name=ceph-ansible 2026-03-29 02:11:36.633762 | orchestrator | + local attempt_num=1 2026-03-29 02:11:36.633774 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:11:36.670270 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:11:36.670431 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:11:36.670455 | orchestrator | + sleep 5 2026-03-29 02:11:41.674300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:11:41.698072 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:11:41.698170 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:11:41.698185 | orchestrator | + sleep 5 2026-03-29 02:11:46.701400 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:11:46.738001 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:11:46.738120 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:11:46.738129 | orchestrator | + sleep 5 2026-03-29 02:11:51.742156 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:11:51.777060 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:11:51.777163 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-29 02:11:51.777185 | orchestrator | + sleep 5 2026-03-29 02:11:56.780447 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:11:56.819720 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:11:56.819798 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:11:56.819807 | orchestrator | + sleep 5 2026-03-29 02:12:01.823827 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:01.861287 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:01.861389 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:12:01.861404 | orchestrator | + sleep 5 2026-03-29 02:12:06.865783 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:06.908770 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:06.908884 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:12:06.908907 | orchestrator | + sleep 5 2026-03-29 02:12:11.914070 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:11.950470 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:11.950585 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:12:11.950605 | orchestrator | + sleep 5 2026-03-29 02:12:16.953829 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:16.981897 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:16.981986 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:12:16.981993 | orchestrator | + sleep 5 2026-03-29 02:12:21.985090 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:22.020759 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:22.020839 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-29 02:12:22.020848 | orchestrator | + sleep 5 2026-03-29 02:12:27.024007 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:27.062887 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:27.062991 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:12:27.063011 | orchestrator | + sleep 5 2026-03-29 02:12:32.068208 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:32.108149 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:32.108249 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:12:32.108273 | orchestrator | + sleep 5 2026-03-29 02:12:37.114462 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:37.148796 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:37.148906 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 02:12:37.148921 | orchestrator | + sleep 5 2026-03-29 02:12:42.153415 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 02:12:42.194393 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:42.194490 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-29 02:12:42.194507 | orchestrator | + local max_attempts=60 2026-03-29 02:12:42.194519 | orchestrator | + local name=kolla-ansible 2026-03-29 02:12:42.194530 | orchestrator | + local attempt_num=1 2026-03-29 02:12:42.194542 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-29 02:12:42.231908 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:42.232005 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-29 02:12:42.232019 | orchestrator | + local max_attempts=60 2026-03-29 02:12:42.232030 | orchestrator | + local name=osism-ansible 2026-03-29 02:12:42.232067 | 
orchestrator | + local attempt_num=1 2026-03-29 02:12:42.232086 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-29 02:12:42.261901 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 02:12:42.262124 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-29 02:12:42.262157 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-29 02:12:42.431254 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-29 02:12:42.574429 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-29 02:12:42.726950 | orchestrator | ARA in osism-ansible already disabled. 2026-03-29 02:12:42.894846 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-29 02:12:42.895350 | orchestrator | + osism apply gather-facts 2026-03-29 02:12:54.771802 | orchestrator | 2026-03-29 02:12:54 | INFO  | Task 6e92e275-04f3-4db1-ac6a-fc5067ecc878 (gather-facts) was prepared for execution. 2026-03-29 02:12:54.771904 | orchestrator | 2026-03-29 02:12:54 | INFO  | It takes a moment until task 6e92e275-04f3-4db1-ac6a-fc5067ecc878 (gather-facts) has been started and output is visible here. 
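The `set -x` trace above reveals the shape of `wait_for_container_healthy`: poll `docker inspect` every five seconds, bail out after `max_attempts` checks. A reconstruction consistent with that trace (the factored-out `container_health` helper and the timeout message are assumptions for readability; the trace inlines the docker call and never reaches the failure path):

```shell
#!/usr/bin/env bash
# Reconstruction of wait_for_container_healthy as implied by the trace:
# poll the container's health status every 5 seconds until it reports
# "healthy", giving up after max_attempts polls.
container_health() {
    # Factored out here for illustration; the trace inlines this call.
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(container_health "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            # Timeout message is an assumption; the trace never hits this.
            echo "$name not healthy after $max_attempts checks" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage mirroring the trace: wait_for_container_healthy 60 ceph-ansible
```

The trace also shows why this matters after `systemctl restart manager.service`: `ceph-ansible` passes through `unhealthy` and `starting` for about a minute before reporting `healthy`, while `kolla-ansible` and `osism-ansible` are healthy on the first check.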
2026-03-29 02:13:08.283829 | orchestrator | 2026-03-29 02:13:08.283959 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 02:13:08.283983 | orchestrator | 2026-03-29 02:13:08.284001 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-29 02:13:08.284014 | orchestrator | Sunday 29 March 2026 02:12:58 +0000 (0:00:00.195) 0:00:00.195 ********** 2026-03-29 02:13:08.284023 | orchestrator | ok: [testbed-manager] 2026-03-29 02:13:08.284033 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:13:08.284043 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:13:08.284065 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:13:08.284074 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:13:08.284083 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:13:08.284092 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:13:08.284100 | orchestrator | 2026-03-29 02:13:08.284109 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 02:13:08.284118 | orchestrator | 2026-03-29 02:13:08.284127 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 02:13:08.284136 | orchestrator | Sunday 29 March 2026 02:13:07 +0000 (0:00:08.615) 0:00:08.810 ********** 2026-03-29 02:13:08.284145 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:13:08.284155 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:13:08.284163 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:13:08.284172 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:13:08.284180 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:13:08.284202 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:13:08.284219 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:13:08.284228 | orchestrator | 2026-03-29 02:13:08.284237 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-29 02:13:08.284246 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:13:08.284256 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:13:08.284265 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:13:08.284274 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:13:08.284283 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:13:08.284292 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:13:08.284300 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 02:13:08.284334 | orchestrator | 2026-03-29 02:13:08.284343 | orchestrator | 2026-03-29 02:13:08.284352 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:13:08.284360 | orchestrator | Sunday 29 March 2026 02:13:08 +0000 (0:00:00.457) 0:00:09.268 ********** 2026-03-29 02:13:08.284369 | orchestrator | =============================================================================== 2026-03-29 02:13:08.284378 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.62s 2026-03-29 02:13:08.284387 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2026-03-29 02:13:08.483211 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-29 02:13:08.493261 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-29 
02:13:08.505226 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-29 02:13:08.524538 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-29 02:13:08.538113 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-29 02:13:08.555344 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-29 02:13:08.564768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-29 02:13:08.574633 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-29 02:13:08.584383 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-29 02:13:08.593352 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-29 02:13:08.602077 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-29 02:13:08.619712 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-29 02:13:08.629069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-29 02:13:08.643130 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-29 02:13:08.653105 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-29 02:13:08.662953 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-29 02:13:08.674538 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-29 02:13:08.692962 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-29 02:13:08.707883 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-29 02:13:08.726063 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-29 02:13:08.736483 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-29 02:13:08.744455 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-29 02:13:08.761325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-29 02:13:08.775362 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-29 02:13:09.090478 | orchestrator | ok: Runtime: 0:24:37.038218 2026-03-29 02:13:09.185685 | 2026-03-29 02:13:09.185843 | TASK [Deploy services] 2026-03-29 02:13:09.979082 | orchestrator | 2026-03-29 02:13:09.979275 | orchestrator | # DEPLOY SERVICES 2026-03-29 02:13:09.979306 | orchestrator | 2026-03-29 02:13:09.979322 | orchestrator | + set -e 2026-03-29 02:13:09.979338 | orchestrator | + echo 2026-03-29 02:13:09.979354 | orchestrator | + echo '# DEPLOY SERVICES' 2026-03-29 02:13:09.979372 | orchestrator | + echo 2026-03-29 02:13:09.979418 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 02:13:09.979437 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 02:13:09.979449 | orchestrator | ++ INTERACTIVE=false 2026-03-29 
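The run of `sudo ln -sf` calls above wires each deploy/upgrade script to a friendly command name under `/usr/local/bin`. The same wiring can be expressed as a small helper; the function and its `name=path` argument convention are illustrative, not the script's actual form:

```shell
#!/usr/bin/env bash
# Illustrative helper: link scripts into a bin directory under short names.
# Each argument is "name=path"; the example entries mirror the trace above.
install_helpers() {
    local bindir=$1; shift
    local entry
    for entry in "$@"; do
        # ${entry%%=*} is the command name, ${entry#*=} the target script.
        ln -sf "${entry#*=}" "$bindir/${entry%%=*}"
    done
}

# Example (run with sudo when bindir is /usr/local/bin):
# install_helpers /usr/local/bin \
#     deploy-helper=/opt/configuration/scripts/deploy/001-helpers.sh \
#     upgrade-manager=/opt/configuration/scripts/upgrade-manager.sh
```

`ln -sf` keeps the operation idempotent: rerunning the script simply replaces any existing link, which is why this step is safe on every job run.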
02:13:09.979459 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 02:13:09.979476 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 02:13:09.979484 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 02:13:09.979495 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 02:13:09.979503 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 02:13:09.979517 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 02:13:09.979525 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 02:13:09.979537 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 02:13:09.979545 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 02:13:09.979556 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 02:13:09.979565 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 02:13:09.979573 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 02:13:09.979582 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 02:13:09.979590 | orchestrator | ++ export ARA=false 2026-03-29 02:13:09.979598 | orchestrator | ++ ARA=false 2026-03-29 02:13:09.979606 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 02:13:09.979614 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 02:13:09.979622 | orchestrator | ++ export TEMPEST=false 2026-03-29 02:13:09.979630 | orchestrator | ++ TEMPEST=false 2026-03-29 02:13:09.979638 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 02:13:09.979646 | orchestrator | ++ IS_ZUUL=true 2026-03-29 02:13:09.979681 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 02:13:09.979689 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 02:13:09.979697 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 02:13:09.979705 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 02:13:09.979713 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 02:13:09.979721 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 02:13:09.979729 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 
02:13:09.979737 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-29 02:13:09.979745 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-29 02:13:09.979760 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-29 02:13:09.979768 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-03-29 02:13:09.989092 | orchestrator |
2026-03-29 02:13:09.989193 | orchestrator | # PULL IMAGES
2026-03-29 02:13:09.989211 | orchestrator |
2026-03-29 02:13:09.989225 | orchestrator | + set -e
2026-03-29 02:13:09.989240 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-29 02:13:09.989256 | orchestrator | ++ export INTERACTIVE=false
2026-03-29 02:13:09.989270 | orchestrator | ++ INTERACTIVE=false
2026-03-29 02:13:09.989283 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-29 02:13:09.989296 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-29 02:13:09.989310 | orchestrator | + source /opt/manager-vars.sh
2026-03-29 02:13:09.989323 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-29 02:13:09.989337 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-29 02:13:09.989350 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-29 02:13:09.989362 | orchestrator | ++ CEPH_VERSION=reef
2026-03-29 02:13:09.989375 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-29 02:13:09.989389 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-29 02:13:09.989403 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-29 02:13:09.989416 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-29 02:13:09.989430 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-29 02:13:09.989443 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-29 02:13:09.989456 | orchestrator | ++ export ARA=false
2026-03-29 02:13:09.989468 | orchestrator | ++ ARA=false
2026-03-29 02:13:09.989485 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-29 02:13:09.989500 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-29 02:13:09.989513 | orchestrator | ++ export TEMPEST=false
2026-03-29 02:13:09.989525 | orchestrator | ++ TEMPEST=false
2026-03-29 02:13:09.989540 | orchestrator | ++ export IS_ZUUL=true
2026-03-29 02:13:09.989553 | orchestrator | ++ IS_ZUUL=true
2026-03-29 02:13:09.989566 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84
2026-03-29 02:13:09.989579 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84
2026-03-29 02:13:09.989591 | orchestrator | ++ export EXTERNAL_API=false
2026-03-29 02:13:09.989605 | orchestrator | ++ EXTERNAL_API=false
2026-03-29 02:13:09.989618 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-29 02:13:09.989631 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-29 02:13:09.989704 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-29 02:13:09.989721 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-29 02:13:09.989736 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-29 02:13:09.989750 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-29 02:13:09.989763 | orchestrator | + echo
2026-03-29 02:13:09.989777 | orchestrator | + echo '# PULL IMAGES'
2026-03-29 02:13:09.989791 | orchestrator | + echo
2026-03-29 02:13:09.991538 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-29 02:13:10.035860 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-29 02:13:10.035956 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-29 02:13:11.927265 | orchestrator | 2026-03-29 02:13:11 | INFO  | Trying to run play pull-images in environment custom
2026-03-29 02:13:22.031734 | orchestrator | 2026-03-29 02:13:22 | INFO  | Task 6d09bebc-bd7d-4b24-bdbe-48ca14fcd256 (pull-images) was prepared for execution.
2026-03-29 02:13:22.031868 | orchestrator | 2026-03-29 02:13:22 | INFO  | Task 6d09bebc-bd7d-4b24-bdbe-48ca14fcd256 is running in background. No more output. Check ARA for logs.
2026-03-29 02:13:22.383923 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-03-29 02:13:34.625853 | orchestrator | 2026-03-29 02:13:34 | INFO  | Task 5f084b3c-a743-47d7-bb96-c8b4c42130bf (cgit) was prepared for execution.
2026-03-29 02:13:34.625966 | orchestrator | 2026-03-29 02:13:34 | INFO  | Task 5f084b3c-a743-47d7-bb96-c8b4c42130bf is running in background. No more output. Check ARA for logs.
2026-03-29 02:13:47.371182 | orchestrator | 2026-03-29 02:13:47 | INFO  | Task 52851e1d-4005-4828-a566-61ff4d6bae58 (dotfiles) was prepared for execution.
2026-03-29 02:13:47.371298 | orchestrator | 2026-03-29 02:13:47 | INFO  | Task 52851e1d-4005-4828-a566-61ff4d6bae58 is running in background. No more output. Check ARA for logs.
2026-03-29 02:13:59.883302 | orchestrator | 2026-03-29 02:13:59 | INFO  | Task a2bb496a-2c70-4379-9ae5-02da9c28850c (homer) was prepared for execution.
2026-03-29 02:13:59.883407 | orchestrator | 2026-03-29 02:13:59 | INFO  | Task a2bb496a-2c70-4379-9ae5-02da9c28850c is running in background. No more output. Check ARA for logs.
2026-03-29 02:14:12.249235 | orchestrator | 2026-03-29 02:14:12 | INFO  | Task c9131183-144d-44ae-a896-9bf2387df970 (phpmyadmin) was prepared for execution.
2026-03-29 02:14:12.249814 | orchestrator | 2026-03-29 02:14:12 | INFO  | Task c9131183-144d-44ae-a896-9bf2387df970 is running in background. No more output. Check ARA for logs.
2026-03-29 02:14:24.727203 | orchestrator | 2026-03-29 02:14:24 | INFO  | Task d398ff13-d996-4071-808d-a7ebb51feac8 (sosreport) was prepared for execution.
2026-03-29 02:14:24.727305 | orchestrator | 2026-03-29 02:14:24 | INFO  | Task d398ff13-d996-4071-808d-a7ebb51feac8 is running in background. No more output. Check ARA for logs.
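Editor's note: the scripts above gate each step on the manager version — `semver 9.5.0 7.0.0` emits a comparison result and the script then tests `[[ 1 -ge 0 ]]`. A minimal sketch of that gating pattern in plain shell, assuming a hypothetical `version_ge` helper built on `sort -V` (not the testbed's own `semver` command):

```shell
#!/bin/sh
# Hypothetical sketch: run an action only when the installed manager
# version is at least a given minimum, mirroring the semver gate above.
version_ge() {
    # sort -V orders versions numerically; the gate holds when the
    # required minimum ($2) sorts first (i.e. $1 >= $2).
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

MANAGER_VERSION=9.5.0
if version_ge "$MANAGER_VERSION" 7.0.0; then
    # In the real script this would be: osism apply ... pull-images
    echo "gate passed for $MANAGER_VERSION"
fi
```

This relies only on POSIX shell plus GNU `sort -V`; the real `semver` helper in the testbed scripts may behave differently (e.g. for pre-release tags).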
2026-03-29 02:14:25.039145 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-03-29 02:14:25.047575 | orchestrator | + set -e
2026-03-29 02:14:25.047647 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-29 02:14:25.047657 | orchestrator | ++ export INTERACTIVE=false
2026-03-29 02:14:25.047665 | orchestrator | ++ INTERACTIVE=false
2026-03-29 02:14:25.047702 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-29 02:14:25.047709 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-29 02:14:25.047716 | orchestrator | + source /opt/manager-vars.sh
2026-03-29 02:14:25.047722 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-29 02:14:25.047728 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-29 02:14:25.047735 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-29 02:14:25.047741 | orchestrator | ++ CEPH_VERSION=reef
2026-03-29 02:14:25.047747 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-29 02:14:25.047754 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-29 02:14:25.047761 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-29 02:14:25.047767 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-29 02:14:25.047774 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-29 02:14:25.047780 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-29 02:14:25.047786 | orchestrator | ++ export ARA=false
2026-03-29 02:14:25.047793 | orchestrator | ++ ARA=false
2026-03-29 02:14:25.047799 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-29 02:14:25.047829 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-29 02:14:25.047836 | orchestrator | ++ export TEMPEST=false
2026-03-29 02:14:25.047842 | orchestrator | ++ TEMPEST=false
2026-03-29 02:14:25.047848 | orchestrator | ++ export IS_ZUUL=true
2026-03-29 02:14:25.047854 | orchestrator | ++ IS_ZUUL=true
2026-03-29 02:14:25.047873 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84
2026-03-29 02:14:25.047883 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84
2026-03-29 02:14:25.047890 | orchestrator | ++ export EXTERNAL_API=false
2026-03-29 02:14:25.047896 | orchestrator | ++ EXTERNAL_API=false
2026-03-29 02:14:25.047902 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-29 02:14:25.047908 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-29 02:14:25.047914 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-29 02:14:25.047921 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-29 02:14:25.047927 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-29 02:14:25.047933 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-29 02:14:25.048351 | orchestrator | ++ semver 9.5.0 8.0.3
2026-03-29 02:14:25.089331 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-29 02:14:25.089397 | orchestrator | + osism apply frr
2026-03-29 02:14:37.479148 | orchestrator | 2026-03-29 02:14:37 | INFO  | Task 0ff8e230-1922-4be1-a6bd-ac5dc8c8b976 (frr) was prepared for execution.
2026-03-29 02:14:37.479289 | orchestrator | 2026-03-29 02:14:37 | INFO  | It takes a moment until task 0ff8e230-1922-4be1-a6bd-ac5dc8c8b976 (frr) has been started and output is visible here.
2026-03-29 02:15:06.499837 | orchestrator |
2026-03-29 02:15:06.499932 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-29 02:15:06.499941 | orchestrator |
2026-03-29 02:15:06.499946 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-29 02:15:06.499956 | orchestrator | Sunday 29 March 2026 02:14:43 +0000 (0:00:00.189) 0:00:00.189 **********
2026-03-29 02:15:06.499960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-29 02:15:06.499966 | orchestrator |
2026-03-29 02:15:06.499970 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-29 02:15:06.499975 | orchestrator | Sunday 29 March 2026 02:14:44 +0000 (0:00:00.202) 0:00:00.391 **********
2026-03-29 02:15:06.499981 | orchestrator | changed: [testbed-manager]
2026-03-29 02:15:06.499989 | orchestrator |
2026-03-29 02:15:06.499995 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-29 02:15:06.500004 | orchestrator | Sunday 29 March 2026 02:14:45 +0000 (0:00:01.100) 0:00:01.491 **********
2026-03-29 02:15:06.500013 | orchestrator | changed: [testbed-manager]
2026-03-29 02:15:06.500019 | orchestrator |
2026-03-29 02:15:06.500026 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-29 02:15:06.500032 | orchestrator | Sunday 29 March 2026 02:14:56 +0000 (0:00:11.177) 0:00:12.669 **********
2026-03-29 02:15:06.500038 | orchestrator | ok: [testbed-manager]
2026-03-29 02:15:06.500045 | orchestrator |
2026-03-29 02:15:06.500051 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-29 02:15:06.500058 | orchestrator | Sunday 29 March 2026 02:14:57 +0000 (0:00:01.099) 0:00:13.769 **********
2026-03-29 02:15:06.500064 | orchestrator | changed: [testbed-manager]
2026-03-29 02:15:06.500070 | orchestrator |
2026-03-29 02:15:06.500076 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-29 02:15:06.500083 | orchestrator | Sunday 29 March 2026 02:14:58 +0000 (0:00:01.209) 0:00:14.978 **********
2026-03-29 02:15:06.500089 | orchestrator | ok: [testbed-manager]
2026-03-29 02:15:06.500094 | orchestrator |
2026-03-29 02:15:06.500100 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-29 02:15:06.500108 | orchestrator | Sunday 29 March 2026 02:15:00 +0000 (0:00:01.250) 0:00:16.228 **********
2026-03-29 02:15:06.500115 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:15:06.500121 | orchestrator |
2026-03-29 02:15:06.500127 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-29 02:15:06.500133 | orchestrator | Sunday 29 March 2026 02:15:00 +0000 (0:00:00.137) 0:00:16.365 **********
2026-03-29 02:15:06.500162 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:15:06.500171 | orchestrator |
2026-03-29 02:15:06.500178 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-29 02:15:06.500184 | orchestrator | Sunday 29 March 2026 02:15:00 +0000 (0:00:00.159) 0:00:16.525 **********
2026-03-29 02:15:06.500190 | orchestrator | changed: [testbed-manager]
2026-03-29 02:15:06.500196 | orchestrator |
2026-03-29 02:15:06.500202 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-29 02:15:06.500208 | orchestrator | Sunday 29 March 2026 02:15:01 +0000 (0:00:00.973) 0:00:17.499 **********
2026-03-29 02:15:06.500215 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-29 02:15:06.500222 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-29 02:15:06.500232 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-29 02:15:06.500239 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-29 02:15:06.500247 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-29 02:15:06.500265 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-29 02:15:06.500271 | orchestrator |
2026-03-29 02:15:06.500278 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-29 02:15:06.500284 | orchestrator | Sunday 29 March 2026 02:15:03 +0000 (0:00:02.399) 0:00:19.899 **********
2026-03-29 02:15:06.500290 | orchestrator | ok: [testbed-manager]
2026-03-29 02:15:06.500296 | orchestrator |
2026-03-29 02:15:06.500301 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-03-29 02:15:06.500308 | orchestrator | Sunday 29 March 2026 02:15:04 +0000 (0:00:01.304) 0:00:21.203 **********
2026-03-29 02:15:06.500313 | orchestrator | changed: [testbed-manager]
2026-03-29 02:15:06.500319 | orchestrator |
2026-03-29 02:15:06.500325 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:15:06.500333 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:15:06.500354 | orchestrator |
2026-03-29 02:15:06.500362 | orchestrator |
2026-03-29 02:15:06.500376 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:15:06.500383 | orchestrator | Sunday 29 March 2026 02:15:06 +0000 (0:00:01.301) 0:00:22.505 **********
2026-03-29 02:15:06.500395 | orchestrator | ===============================================================================
2026-03-29 02:15:06.500407 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.18s
2026-03-29 02:15:06.500413 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.40s
2026-03-29 02:15:06.500420 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.30s
2026-03-29 02:15:06.500426 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.30s
2026-03-29 02:15:06.500432 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.25s
2026-03-29 02:15:06.500458 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.21s
2026-03-29 02:15:06.500468 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.10s
2026-03-29 02:15:06.500474 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.10s
2026-03-29 02:15:06.500484 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.97s
2026-03-29 02:15:06.500492 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2026-03-29 02:15:06.500499 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s
2026-03-29 02:15:06.500506 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s
2026-03-29 02:15:06.754794 | orchestrator | + osism apply kubernetes
2026-03-29 02:15:08.677048 | orchestrator | 2026-03-29 02:15:08 | INFO  | Task a9095822-071a-41cd-b07b-1fa8cb396d6c (kubernetes) was prepared for execution.
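Editor's note: the "Set sysctl parameters" task above applies six kernel parameters on testbed-manager. For reference, the same values expressed as a persistent sysctl drop-in (the file path is illustrative, not taken from this job):

```
# Illustrative /etc/sysctl.d/ drop-in mirroring the values the
# osism.services.frr role applied on testbed-manager above.
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

These enable forwarding and multipath hashing for the FRR BGP setup while disabling ICMP redirects; `rp_filter = 2` selects loose reverse-path filtering.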
2026-03-29 02:15:08.677124 | orchestrator | 2026-03-29 02:15:08 | INFO  | It takes a moment until task a9095822-071a-41cd-b07b-1fa8cb396d6c (kubernetes) has been started and output is visible here.
2026-03-29 02:15:34.172213 | orchestrator |
2026-03-29 02:15:34.172329 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-29 02:15:34.172340 | orchestrator |
2026-03-29 02:15:34.172347 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-29 02:15:34.172356 | orchestrator | Sunday 29 March 2026 02:15:13 +0000 (0:00:00.164) 0:00:00.164 **********
2026-03-29 02:15:34.172363 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:15:34.172371 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:15:34.172377 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:15:34.172384 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:15:34.172390 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:15:34.172396 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:15:34.172404 | orchestrator |
2026-03-29 02:15:34.172454 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-29 02:15:34.172464 | orchestrator | Sunday 29 March 2026 02:15:14 +0000 (0:00:00.731) 0:00:00.895 **********
2026-03-29 02:15:34.172471 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.172479 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.172486 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.172492 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.172498 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.172505 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.172511 | orchestrator |
2026-03-29 02:15:34.172518 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-29 02:15:34.172526 | orchestrator | Sunday 29 March 2026 02:15:14 +0000 (0:00:00.667) 0:00:01.563 **********
2026-03-29 02:15:34.172532 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.172538 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.172545 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.172551 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.172557 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.172563 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.172570 | orchestrator |
2026-03-29 02:15:34.172576 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-29 02:15:34.172582 | orchestrator | Sunday 29 March 2026 02:15:15 +0000 (0:00:00.771) 0:00:02.335 **********
2026-03-29 02:15:34.172589 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:15:34.172595 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:15:34.172601 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:15:34.172611 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:15:34.172617 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:15:34.172624 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:15:34.172630 | orchestrator |
2026-03-29 02:15:34.172637 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-29 02:15:34.172643 | orchestrator | Sunday 29 March 2026 02:15:17 +0000 (0:00:02.233) 0:00:04.569 **********
2026-03-29 02:15:34.172651 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:15:34.172655 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:15:34.172660 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:15:34.172667 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:15:34.172673 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:15:34.172679 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:15:34.172725 | orchestrator |
2026-03-29 02:15:34.172732 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-29 02:15:34.172740 | orchestrator | Sunday 29 March 2026 02:15:19 +0000 (0:00:01.750) 0:00:06.319 **********
2026-03-29 02:15:34.172744 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:15:34.172762 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:15:34.172766 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:15:34.172771 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:15:34.172776 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:15:34.172782 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:15:34.172788 | orchestrator |
2026-03-29 02:15:34.172800 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-29 02:15:34.172807 | orchestrator | Sunday 29 March 2026 02:15:21 +0000 (0:00:02.019) 0:00:08.339 **********
2026-03-29 02:15:34.172814 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.172821 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.172827 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.172833 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.172836 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.172840 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.172844 | orchestrator |
2026-03-29 02:15:34.172848 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-29 02:15:34.172851 | orchestrator | Sunday 29 March 2026 02:15:22 +0000 (0:00:00.604) 0:00:08.943 **********
2026-03-29 02:15:34.172855 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.172859 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.172863 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.172866 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.172870 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.172874 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.172877 | orchestrator |
2026-03-29 02:15:34.172881 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-29 02:15:34.172885 | orchestrator | Sunday 29 March 2026 02:15:22 +0000 (0:00:00.822) 0:00:09.766 **********
2026-03-29 02:15:34.172889 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 02:15:34.172892 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 02:15:34.172896 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.172900 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 02:15:34.172903 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 02:15:34.172907 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.172911 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 02:15:34.172915 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 02:15:34.172918 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.172922 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 02:15:34.172938 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 02:15:34.172942 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.172946 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 02:15:34.172950 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 02:15:34.172954 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.172957 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 02:15:34.172961 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 02:15:34.172965 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.172969 | orchestrator |
2026-03-29 02:15:34.172972 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-29 02:15:34.172976 | orchestrator | Sunday 29 March 2026 02:15:23 +0000 (0:00:01.189) 0:00:10.422 **********
2026-03-29 02:15:34.172980 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.172983 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.172987 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.172996 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.173000 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.173004 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.173007 | orchestrator |
2026-03-29 02:15:34.173011 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-29 02:15:34.173016 | orchestrator | Sunday 29 March 2026 02:15:24 +0000 (0:00:00.858) 0:00:11.611 **********
2026-03-29 02:15:34.173019 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:15:34.173023 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:15:34.173027 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:15:34.173030 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:15:34.173034 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:15:34.173038 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:15:34.173041 | orchestrator |
2026-03-29 02:15:34.173045 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-29 02:15:34.173049 | orchestrator | Sunday 29 March 2026 02:15:25 +0000 (0:00:00.858) 0:00:12.470 **********
2026-03-29 02:15:34.173053 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:15:34.173056 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:15:34.173060 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:15:34.173064 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:15:34.173067 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:15:34.173073 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:15:34.173079 | orchestrator |
2026-03-29 02:15:34.173086 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-29 02:15:34.173091 | orchestrator | Sunday 29 March 2026 02:15:31 +0000 (0:00:05.489) 0:00:17.960 **********
2026-03-29 02:15:34.173097 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.173106 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.173113 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.173119 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.173125 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.173131 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.173138 | orchestrator |
2026-03-29 02:15:34.173144 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-29 02:15:34.173150 | orchestrator | Sunday 29 March 2026 02:15:31 +0000 (0:00:00.655) 0:00:18.615 **********
2026-03-29 02:15:34.173156 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.173159 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.173163 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.173167 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.173171 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.173174 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.173178 | orchestrator |
2026-03-29 02:15:34.173182 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-29 02:15:34.173187 | orchestrator | Sunday 29 March 2026 02:15:32 +0000 (0:00:00.965) 0:00:19.581 **********
2026-03-29 02:15:34.173191 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.173194 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.173198 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.173202 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.173205 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.173209 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.173213 | orchestrator |
2026-03-29 02:15:34.173216 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-29 02:15:34.173220 | orchestrator | Sunday 29 March 2026 02:15:33 +0000 (0:00:00.571) 0:00:20.152 **********
2026-03-29 02:15:34.173224 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-29 02:15:34.173232 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-29 02:15:34.173236 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:15:34.173240 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-29 02:15:34.173249 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-29 02:15:34.173253 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:15:34.173256 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-29 02:15:34.173260 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-29 02:15:34.173263 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:15:34.173267 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-29 02:15:34.173271 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-29 02:15:34.173274 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:15:34.173278 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-29 02:15:34.173282 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-29 02:15:34.173285 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:15:34.173289 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-29 02:15:34.173293 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-29 02:15:34.173297 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:15:34.173300 | orchestrator |
2026-03-29 02:15:34.173304 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-29 02:15:34.173311 | orchestrator | Sunday 29 March 2026 02:15:34 +0000 (0:00:00.821) 0:00:20.974 **********
2026-03-29 02:16:48.893055 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:16:48.893140 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:16:48.893148 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:16:48.893155 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:16:48.893161 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:16:48.893166 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:16:48.893172 | orchestrator |
2026-03-29 02:16:48.893180 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-29 02:16:48.893188 | orchestrator | Sunday 29 March 2026 02:15:34 +0000 (0:00:00.598) 0:00:21.573 **********
2026-03-29 02:16:48.893194 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:16:48.893199 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:16:48.893204 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:16:48.893210 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:16:48.893216 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:16:48.893221 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:16:48.893226 | orchestrator |
2026-03-29 02:16:48.893232 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-29 02:16:48.893237 | orchestrator |
2026-03-29 02:16:48.893243 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-29 02:16:48.893249 | orchestrator | Sunday 29 March 2026 02:15:35 +0000 (0:00:01.172) 0:00:22.746 **********
2026-03-29 02:16:48.893254 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:16:48.893261 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:16:48.893266 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:16:48.893271 | orchestrator |
2026-03-29 02:16:48.893277 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-29 02:16:48.893282 | orchestrator | Sunday 29 March 2026 02:15:36 +0000 (0:00:00.925) 0:00:23.671 **********
2026-03-29 02:16:48.893288 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:16:48.893293 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:16:48.893298 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:16:48.893304 | orchestrator |
2026-03-29 02:16:48.893309 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-29 02:16:48.893314 | orchestrator | Sunday 29 March 2026 02:15:38 +0000 (0:00:01.389) 0:00:25.060 **********
2026-03-29 02:16:48.893320 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:16:48.893325 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:16:48.893330 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:16:48.893336 | orchestrator |
2026-03-29 02:16:48.893342 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-29 02:16:48.893347 | orchestrator | Sunday 29 March 2026 02:15:39 +0000 (0:00:00.976) 0:00:26.036 **********
2026-03-29 02:16:48.893370 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:16:48.893376 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:16:48.893381 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:16:48.893387 | orchestrator |
2026-03-29 02:16:48.893392 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-29 02:16:48.893397 | orchestrator | Sunday 29 March 2026 02:15:40 +0000 (0:00:00.801) 0:00:26.838 **********
2026-03-29 02:16:48.893403 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:16:48.893408 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:16:48.893414 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:16:48.893419 | orchestrator |
2026-03-29 02:16:48.893424 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-29 02:16:48.893442 | orchestrator | Sunday 29 March 2026 02:15:40 +0000 (0:00:00.420) 0:00:27.259 **********
2026-03-29 02:16:48.893448 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:16:48.893453 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:16:48.893458 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:16:48.893463 | orchestrator |
2026-03-29 02:16:48.893469 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-29 02:16:48.893474 | orchestrator | Sunday 29 March 2026 02:15:41 +0000 (0:00:01.201) 0:00:28.461 **********
2026-03-29 02:16:48.893479 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:16:48.893485 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:16:48.893490 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:16:48.893495 | orchestrator |
2026-03-29 02:16:48.893501 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-29 02:16:48.893506 | orchestrator | Sunday 29 March 2026 02:15:43 +0000 (0:00:01.530) 0:00:29.991 **********
2026-03-29 02:16:48.893512 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:16:48.893517 | orchestrator |
2026-03-29 02:16:48.893522 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-29 02:16:48.893528 | orchestrator |
Sunday 29 March 2026 02:15:43 +0000 (0:00:00.526) 0:00:30.518 ********** 2026-03-29 02:16:48.893533 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:16:48.893538 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:16:48.893544 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:16:48.893549 | orchestrator | 2026-03-29 02:16:48.893555 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-29 02:16:48.893560 | orchestrator | Sunday 29 March 2026 02:15:46 +0000 (0:00:02.312) 0:00:32.831 ********** 2026-03-29 02:16:48.893565 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:16:48.893571 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:16:48.893576 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:16:48.893581 | orchestrator | 2026-03-29 02:16:48.893590 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-29 02:16:48.893599 | orchestrator | Sunday 29 March 2026 02:15:46 +0000 (0:00:00.568) 0:00:33.399 ********** 2026-03-29 02:16:48.893607 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:16:48.893622 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:16:48.893636 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:16:48.893644 | orchestrator | 2026-03-29 02:16:48.893653 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-29 02:16:48.893661 | orchestrator | Sunday 29 March 2026 02:15:47 +0000 (0:00:01.006) 0:00:34.406 ********** 2026-03-29 02:16:48.893670 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:16:48.893679 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:16:48.893687 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:16:48.893696 | orchestrator | 2026-03-29 02:16:48.893767 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-29 02:16:48.893793 | orchestrator | Sunday 29 March 2026 
02:15:48 +0000 (0:00:01.134) 0:00:35.540 ********** 2026-03-29 02:16:48.893804 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:16:48.893824 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:16:48.893835 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:16:48.893841 | orchestrator | 2026-03-29 02:16:48.893847 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-29 02:16:48.893853 | orchestrator | Sunday 29 March 2026 02:15:49 +0000 (0:00:00.437) 0:00:35.978 ********** 2026-03-29 02:16:48.893860 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:16:48.893866 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:16:48.893872 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:16:48.893878 | orchestrator | 2026-03-29 02:16:48.893884 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-29 02:16:48.893891 | orchestrator | Sunday 29 March 2026 02:15:49 +0000 (0:00:00.267) 0:00:36.246 ********** 2026-03-29 02:16:48.893897 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:16:48.893904 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:16:48.893910 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:16:48.893916 | orchestrator | 2026-03-29 02:16:48.893928 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-29 02:16:48.893934 | orchestrator | Sunday 29 March 2026 02:15:50 +0000 (0:00:01.013) 0:00:37.260 ********** 2026-03-29 02:16:48.893941 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:16:48.893946 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:16:48.893951 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:16:48.893957 | orchestrator | 2026-03-29 02:16:48.893962 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-29 02:16:48.893968 | orchestrator | Sunday 29 March 2026 02:15:53 +0000 
(0:00:02.933) 0:00:40.193 ********** 2026-03-29 02:16:48.893973 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:16:48.893979 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:16:48.893984 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:16:48.893992 | orchestrator | 2026-03-29 02:16:48.893998 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-29 02:16:48.894003 | orchestrator | Sunday 29 March 2026 02:15:53 +0000 (0:00:00.342) 0:00:40.536 ********** 2026-03-29 02:16:48.894009 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 02:16:48.894059 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 02:16:48.894065 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 02:16:48.894070 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-29 02:16:48.894076 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-29 02:16:48.894081 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-29 02:16:48.894087 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-29 02:16:48.894092 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-03-29 02:16:48.894097 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-29 02:16:48.894103 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-29 02:16:48.894108 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-29 02:16:48.894118 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-29 02:16:48.894124 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-29 02:16:48.894129 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-29 02:16:48.894135 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-03-29 02:16:48.894140 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:16:48.894145 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:16:48.894151 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:16:48.894156 | orchestrator | 2026-03-29 02:16:48.894166 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-29 02:16:48.894172 | orchestrator | Sunday 29 March 2026 02:16:47 +0000 (0:00:53.866) 0:01:34.402 ********** 2026-03-29 02:16:48.894177 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:16:48.894183 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:16:48.894188 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:16:48.894193 | orchestrator | 2026-03-29 02:16:48.894199 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-29 02:16:48.894204 | orchestrator | Sunday 29 March 2026 02:16:47 +0000 (0:00:00.291) 0:01:34.694 ********** 2026-03-29 02:16:48.894215 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:17:31.081302 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:17:31.081460 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:17:31.081480 | orchestrator | 2026-03-29 02:17:31.081494 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-29 02:17:31.081507 | orchestrator | Sunday 29 March 2026 02:16:48 +0000 (0:00:01.007) 0:01:35.702 ********** 2026-03-29 02:17:31.081518 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:17:31.081529 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:17:31.081540 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:17:31.081551 | orchestrator | 2026-03-29 02:17:31.081562 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-29 02:17:31.081573 | orchestrator | Sunday 29 March 2026 02:16:50 +0000 (0:00:01.191) 0:01:36.894 ********** 2026-03-29 02:17:31.081600 
| orchestrator | changed: [testbed-node-1] 2026-03-29 02:17:31.081622 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:17:31.081634 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:17:31.081645 | orchestrator | 2026-03-29 02:17:31.081656 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-29 02:17:31.081667 | orchestrator | Sunday 29 March 2026 02:17:16 +0000 (0:00:26.426) 0:02:03.321 ********** 2026-03-29 02:17:31.081678 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:17:31.081689 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:17:31.081700 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:17:31.081769 | orchestrator | 2026-03-29 02:17:31.081784 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-29 02:17:31.081796 | orchestrator | Sunday 29 March 2026 02:17:17 +0000 (0:00:00.600) 0:02:03.921 ********** 2026-03-29 02:17:31.081807 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:17:31.081820 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:17:31.081833 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:17:31.081845 | orchestrator | 2026-03-29 02:17:31.081858 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-29 02:17:31.081871 | orchestrator | Sunday 29 March 2026 02:17:17 +0000 (0:00:00.657) 0:02:04.579 ********** 2026-03-29 02:17:31.081884 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:17:31.081896 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:17:31.081909 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:17:31.081921 | orchestrator | 2026-03-29 02:17:31.081934 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-29 02:17:31.081976 | orchestrator | Sunday 29 March 2026 02:17:18 +0000 (0:00:00.608) 0:02:05.187 ********** 2026-03-29 02:17:31.081989 | orchestrator | ok: [testbed-node-1] 
2026-03-29 02:17:31.082002 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:17:31.082086 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:17:31.082109 | orchestrator | 2026-03-29 02:17:31.082129 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-29 02:17:31.082148 | orchestrator | Sunday 29 March 2026 02:17:19 +0000 (0:00:00.790) 0:02:05.978 ********** 2026-03-29 02:17:31.082169 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:17:31.082188 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:17:31.082204 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:17:31.082215 | orchestrator | 2026-03-29 02:17:31.082226 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-29 02:17:31.082237 | orchestrator | Sunday 29 March 2026 02:17:19 +0000 (0:00:00.295) 0:02:06.273 ********** 2026-03-29 02:17:31.082247 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:17:31.082258 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:17:31.082269 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:17:31.082279 | orchestrator | 2026-03-29 02:17:31.082290 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-29 02:17:31.082301 | orchestrator | Sunday 29 March 2026 02:17:20 +0000 (0:00:00.637) 0:02:06.910 ********** 2026-03-29 02:17:31.082312 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:17:31.082322 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:17:31.082333 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:17:31.082344 | orchestrator | 2026-03-29 02:17:31.082355 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-29 02:17:31.082366 | orchestrator | Sunday 29 March 2026 02:17:20 +0000 (0:00:00.611) 0:02:07.522 ********** 2026-03-29 02:17:31.082376 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:17:31.082387 | 
orchestrator | changed: [testbed-node-1] 2026-03-29 02:17:31.082397 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:17:31.082408 | orchestrator | 2026-03-29 02:17:31.082419 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-29 02:17:31.082430 | orchestrator | Sunday 29 March 2026 02:17:21 +0000 (0:00:00.910) 0:02:08.433 ********** 2026-03-29 02:17:31.082443 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:17:31.082454 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:17:31.082465 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:17:31.082475 | orchestrator | 2026-03-29 02:17:31.082486 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-29 02:17:31.082497 | orchestrator | Sunday 29 March 2026 02:17:22 +0000 (0:00:01.079) 0:02:09.513 ********** 2026-03-29 02:17:31.082514 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:17:31.082532 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:17:31.082548 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:17:31.082565 | orchestrator | 2026-03-29 02:17:31.082585 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-29 02:17:31.082603 | orchestrator | Sunday 29 March 2026 02:17:22 +0000 (0:00:00.285) 0:02:09.799 ********** 2026-03-29 02:17:31.082620 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:17:31.082638 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:17:31.082657 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:17:31.082676 | orchestrator | 2026-03-29 02:17:31.082695 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-29 02:17:31.082783 | orchestrator | Sunday 29 March 2026 02:17:23 +0000 (0:00:00.285) 0:02:10.084 ********** 2026-03-29 02:17:31.082808 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:17:31.082827 | orchestrator | 
ok: [testbed-node-2] 2026-03-29 02:17:31.082842 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:17:31.082853 | orchestrator | 2026-03-29 02:17:31.082864 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-29 02:17:31.082875 | orchestrator | Sunday 29 March 2026 02:17:23 +0000 (0:00:00.644) 0:02:10.728 ********** 2026-03-29 02:17:31.082904 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:17:31.082915 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:17:31.082948 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:17:31.082959 | orchestrator | 2026-03-29 02:17:31.082971 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-29 02:17:31.082983 | orchestrator | Sunday 29 March 2026 02:17:24 +0000 (0:00:01.071) 0:02:11.800 ********** 2026-03-29 02:17:31.082995 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 02:17:31.083006 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 02:17:31.083017 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 02:17:31.083028 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 02:17:31.083039 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 02:17:31.083049 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 02:17:31.083060 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 02:17:31.083072 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 
02:17:31.083083 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 02:17:31.083094 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-29 02:17:31.083105 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 02:17:31.083116 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 02:17:31.083127 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-29 02:17:31.083138 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 02:17:31.083148 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 02:17:31.083159 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 02:17:31.083170 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 02:17:31.083181 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 02:17:31.083192 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 02:17:31.083203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 02:17:31.083214 | orchestrator | 2026-03-29 02:17:31.083225 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-29 02:17:31.083235 | orchestrator | 2026-03-29 02:17:31.083246 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-29 02:17:31.083257 | orchestrator | Sunday 29 March 2026 02:17:28 +0000 (0:00:03.117) 
0:02:14.918 ********** 2026-03-29 02:17:31.083266 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:17:31.083276 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:17:31.083286 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:17:31.083295 | orchestrator | 2026-03-29 02:17:31.083323 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-29 02:17:31.083333 | orchestrator | Sunday 29 March 2026 02:17:28 +0000 (0:00:00.325) 0:02:15.243 ********** 2026-03-29 02:17:31.083342 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:17:31.083352 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:17:31.083361 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:17:31.083377 | orchestrator | 2026-03-29 02:17:31.083387 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-29 02:17:31.083396 | orchestrator | Sunday 29 March 2026 02:17:29 +0000 (0:00:00.855) 0:02:16.099 ********** 2026-03-29 02:17:31.083406 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:17:31.083415 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:17:31.083425 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:17:31.083434 | orchestrator | 2026-03-29 02:17:31.083444 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-29 02:17:31.083454 | orchestrator | Sunday 29 March 2026 02:17:29 +0000 (0:00:00.303) 0:02:16.403 ********** 2026-03-29 02:17:31.083463 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:17:31.083473 | orchestrator | 2026-03-29 02:17:31.083483 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-29 02:17:31.083492 | orchestrator | Sunday 29 March 2026 02:17:30 +0000 (0:00:00.463) 0:02:16.867 ********** 2026-03-29 02:17:31.083502 | orchestrator | skipping: [testbed-node-3] 2026-03-29 
02:17:31.083512 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:17:31.083521 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:17:31.083531 | orchestrator | 2026-03-29 02:17:31.083540 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-29 02:17:31.083550 | orchestrator | Sunday 29 March 2026 02:17:30 +0000 (0:00:00.506) 0:02:17.374 ********** 2026-03-29 02:17:31.083560 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:17:31.083569 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:17:31.083579 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:17:31.083588 | orchestrator | 2026-03-29 02:17:31.083598 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-29 02:17:31.083608 | orchestrator | Sunday 29 March 2026 02:17:30 +0000 (0:00:00.339) 0:02:17.714 ********** 2026-03-29 02:17:31.083623 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:19:08.712819 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:19:08.712913 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:19:08.712921 | orchestrator | 2026-03-29 02:19:08.712926 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-29 02:19:08.712932 | orchestrator | Sunday 29 March 2026 02:17:31 +0000 (0:00:00.305) 0:02:18.019 ********** 2026-03-29 02:19:08.712936 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:19:08.712940 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:19:08.712944 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:19:08.712948 | orchestrator | 2026-03-29 02:19:08.712952 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-29 02:19:08.712956 | orchestrator | Sunday 29 March 2026 02:17:31 +0000 (0:00:00.652) 0:02:18.671 ********** 2026-03-29 02:19:08.712960 | orchestrator | changed: [testbed-node-3] 2026-03-29 
02:19:08.712964 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:19:08.712968 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:19:08.712971 | orchestrator | 2026-03-29 02:19:08.712975 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-29 02:19:08.712979 | orchestrator | Sunday 29 March 2026 02:17:33 +0000 (0:00:01.499) 0:02:20.170 ********** 2026-03-29 02:19:08.712983 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:19:08.712987 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:19:08.712990 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:19:08.712994 | orchestrator | 2026-03-29 02:19:08.712998 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-29 02:19:08.713002 | orchestrator | Sunday 29 March 2026 02:17:34 +0000 (0:00:01.291) 0:02:21.462 ********** 2026-03-29 02:19:08.713005 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:19:08.713009 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:19:08.713013 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:19:08.713017 | orchestrator | 2026-03-29 02:19:08.713021 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-29 02:19:08.713042 | orchestrator | 2026-03-29 02:19:08.713047 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-29 02:19:08.713050 | orchestrator | Sunday 29 March 2026 02:17:44 +0000 (0:00:09.912) 0:02:31.374 ********** 2026-03-29 02:19:08.713054 | orchestrator | ok: [testbed-manager] 2026-03-29 02:19:08.713059 | orchestrator | 2026-03-29 02:19:08.713063 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-29 02:19:08.713067 | orchestrator | Sunday 29 March 2026 02:17:45 +0000 (0:00:00.805) 0:02:32.180 ********** 2026-03-29 02:19:08.713070 | orchestrator | changed: [testbed-manager] 
2026-03-29 02:19:08.713074 | orchestrator | 2026-03-29 02:19:08.713078 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-29 02:19:08.713082 | orchestrator | Sunday 29 March 2026 02:17:46 +0000 (0:00:00.641) 0:02:32.821 ********** 2026-03-29 02:19:08.713086 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 02:19:08.713090 | orchestrator | 2026-03-29 02:19:08.713093 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-29 02:19:08.713097 | orchestrator | Sunday 29 March 2026 02:17:46 +0000 (0:00:00.542) 0:02:33.364 ********** 2026-03-29 02:19:08.713101 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:08.713105 | orchestrator | 2026-03-29 02:19:08.713108 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-29 02:19:08.713112 | orchestrator | Sunday 29 March 2026 02:17:47 +0000 (0:00:00.857) 0:02:34.222 ********** 2026-03-29 02:19:08.713116 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:08.713119 | orchestrator | 2026-03-29 02:19:08.713123 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-29 02:19:08.713127 | orchestrator | Sunday 29 March 2026 02:17:47 +0000 (0:00:00.583) 0:02:34.806 ********** 2026-03-29 02:19:08.713131 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 02:19:08.713135 | orchestrator | 2026-03-29 02:19:08.713138 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-29 02:19:08.713142 | orchestrator | Sunday 29 March 2026 02:17:49 +0000 (0:00:01.563) 0:02:36.370 ********** 2026-03-29 02:19:08.713146 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 02:19:08.713150 | orchestrator | 2026-03-29 02:19:08.713164 | orchestrator | TASK [Set KUBECONFIG environment variable] 
************************************* 2026-03-29 02:19:08.713172 | orchestrator | Sunday 29 March 2026 02:17:50 +0000 (0:00:00.833) 0:02:37.203 ********** 2026-03-29 02:19:08.713176 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:08.713180 | orchestrator | 2026-03-29 02:19:08.713184 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-29 02:19:08.713187 | orchestrator | Sunday 29 March 2026 02:17:50 +0000 (0:00:00.431) 0:02:37.635 ********** 2026-03-29 02:19:08.713191 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:08.713195 | orchestrator | 2026-03-29 02:19:08.713199 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-29 02:19:08.713202 | orchestrator | 2026-03-29 02:19:08.713206 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-29 02:19:08.713211 | orchestrator | Sunday 29 March 2026 02:17:51 +0000 (0:00:00.478) 0:02:38.114 ********** 2026-03-29 02:19:08.713214 | orchestrator | ok: [testbed-manager] 2026-03-29 02:19:08.713218 | orchestrator | 2026-03-29 02:19:08.713222 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-29 02:19:08.713226 | orchestrator | Sunday 29 March 2026 02:17:51 +0000 (0:00:00.391) 0:02:38.505 ********** 2026-03-29 02:19:08.713229 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 02:19:08.713234 | orchestrator | 2026-03-29 02:19:08.713238 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-29 02:19:08.713242 | orchestrator | Sunday 29 March 2026 02:17:51 +0000 (0:00:00.252) 0:02:38.757 ********** 2026-03-29 02:19:08.713245 | orchestrator | ok: [testbed-manager] 2026-03-29 02:19:08.713249 | orchestrator | 2026-03-29 02:19:08.713257 | orchestrator | TASK [kubectl : Install 
apt-transport-https package] *************************** 2026-03-29 02:19:08.713260 | orchestrator | Sunday 29 March 2026 02:17:52 +0000 (0:00:00.815) 0:02:39.573 ********** 2026-03-29 02:19:08.713264 | orchestrator | ok: [testbed-manager] 2026-03-29 02:19:08.713268 | orchestrator | 2026-03-29 02:19:08.713283 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-29 02:19:08.713287 | orchestrator | Sunday 29 March 2026 02:17:54 +0000 (0:00:01.681) 0:02:41.254 ********** 2026-03-29 02:19:08.713291 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:08.713295 | orchestrator | 2026-03-29 02:19:08.713298 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-29 02:19:08.713302 | orchestrator | Sunday 29 March 2026 02:17:55 +0000 (0:00:00.826) 0:02:42.080 ********** 2026-03-29 02:19:08.713306 | orchestrator | ok: [testbed-manager] 2026-03-29 02:19:08.713309 | orchestrator | 2026-03-29 02:19:08.713313 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-29 02:19:08.713317 | orchestrator | Sunday 29 March 2026 02:17:55 +0000 (0:00:00.469) 0:02:42.549 ********** 2026-03-29 02:19:08.713320 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:08.713324 | orchestrator | 2026-03-29 02:19:08.713328 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-29 02:19:08.713332 | orchestrator | Sunday 29 March 2026 02:18:03 +0000 (0:00:07.571) 0:02:50.121 ********** 2026-03-29 02:19:08.713335 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:08.713340 | orchestrator | 2026-03-29 02:19:08.713344 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-29 02:19:08.713349 | orchestrator | Sunday 29 March 2026 02:18:15 +0000 (0:00:12.375) 0:03:02.497 ********** 2026-03-29 02:19:08.713353 | orchestrator | ok: 
[testbed-manager] 2026-03-29 02:19:08.713358 | orchestrator | 2026-03-29 02:19:08.713362 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-29 02:19:08.713367 | orchestrator | 2026-03-29 02:19:08.713371 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-29 02:19:08.713376 | orchestrator | Sunday 29 March 2026 02:18:16 +0000 (0:00:00.767) 0:03:03.264 ********** 2026-03-29 02:19:08.713380 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:19:08.713384 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:19:08.713389 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:19:08.713393 | orchestrator | 2026-03-29 02:19:08.713397 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-29 02:19:08.713402 | orchestrator | Sunday 29 March 2026 02:18:16 +0000 (0:00:00.308) 0:03:03.573 ********** 2026-03-29 02:19:08.713406 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:08.713410 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:19:08.713415 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:19:08.713419 | orchestrator | 2026-03-29 02:19:08.713423 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-29 02:19:08.713427 | orchestrator | Sunday 29 March 2026 02:18:17 +0000 (0:00:00.322) 0:03:03.896 ********** 2026-03-29 02:19:08.713432 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:19:08.713437 | orchestrator | 2026-03-29 02:19:08.713441 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-29 02:19:08.713446 | orchestrator | Sunday 29 March 2026 02:18:17 +0000 (0:00:00.685) 0:03:04.582 ********** 2026-03-29 02:19:08.713450 | orchestrator | changed: [testbed-node-0 -> localhost] 
2026-03-29 02:19:08.713455 | orchestrator | 2026-03-29 02:19:08.713459 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-29 02:19:08.713464 | orchestrator | Sunday 29 March 2026 02:18:18 +0000 (0:00:00.809) 0:03:05.391 ********** 2026-03-29 02:19:08.713469 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 02:19:08.713473 | orchestrator | 2026-03-29 02:19:08.713477 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-29 02:19:08.713485 | orchestrator | Sunday 29 March 2026 02:18:19 +0000 (0:00:00.815) 0:03:06.206 ********** 2026-03-29 02:19:08.713489 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:08.713494 | orchestrator | 2026-03-29 02:19:08.713498 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-29 02:19:08.713502 | orchestrator | Sunday 29 March 2026 02:18:19 +0000 (0:00:00.131) 0:03:06.337 ********** 2026-03-29 02:19:08.713507 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 02:19:08.713511 | orchestrator | 2026-03-29 02:19:08.713516 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-29 02:19:08.713520 | orchestrator | Sunday 29 March 2026 02:18:20 +0000 (0:00:01.062) 0:03:07.399 ********** 2026-03-29 02:19:08.713524 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:08.713528 | orchestrator | 2026-03-29 02:19:08.713533 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-29 02:19:08.713537 | orchestrator | Sunday 29 March 2026 02:18:20 +0000 (0:00:00.118) 0:03:07.518 ********** 2026-03-29 02:19:08.713541 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:08.713546 | orchestrator | 2026-03-29 02:19:08.713550 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-29 02:19:08.713554 | 
orchestrator | Sunday 29 March 2026 02:18:20 +0000 (0:00:00.122) 0:03:07.640 ********** 2026-03-29 02:19:08.713559 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:08.713563 | orchestrator | 2026-03-29 02:19:08.713567 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-29 02:19:08.713574 | orchestrator | Sunday 29 March 2026 02:18:20 +0000 (0:00:00.138) 0:03:07.778 ********** 2026-03-29 02:19:08.713579 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:08.713584 | orchestrator | 2026-03-29 02:19:08.713588 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-29 02:19:08.713593 | orchestrator | Sunday 29 March 2026 02:18:21 +0000 (0:00:00.125) 0:03:07.904 ********** 2026-03-29 02:19:08.713597 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 02:19:08.713603 | orchestrator | 2026-03-29 02:19:08.713610 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-29 02:19:08.713616 | orchestrator | Sunday 29 March 2026 02:18:26 +0000 (0:00:05.364) 0:03:13.268 ********** 2026-03-29 02:19:08.713622 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-29 02:19:08.713629 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-29 02:19:08.713640 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-29 02:19:32.176582 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-29 02:19:32.176739 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-29 02:19:32.176799 | orchestrator | 2026-03-29 02:19:32.176835 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-29 02:19:32.176858 | orchestrator | Sunday 29 March 2026 02:19:08 +0000 (0:00:42.252) 0:03:55.520 ********** 2026-03-29 02:19:32.176878 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 02:19:32.176898 | orchestrator | 2026-03-29 02:19:32.176911 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-29 02:19:32.176922 | orchestrator | Sunday 29 March 2026 02:19:09 +0000 (0:00:01.277) 0:03:56.797 ********** 2026-03-29 02:19:32.176933 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 02:19:32.176944 | orchestrator | 2026-03-29 02:19:32.176956 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-29 02:19:32.176967 | orchestrator | Sunday 29 March 2026 02:19:11 +0000 (0:00:01.557) 0:03:58.354 ********** 2026-03-29 02:19:32.176978 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 02:19:32.176988 | orchestrator | 2026-03-29 02:19:32.176999 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-29 02:19:32.177011 | orchestrator | Sunday 29 March 2026 02:19:12 +0000 (0:00:01.321) 0:03:59.676 ********** 2026-03-29 02:19:32.177051 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:32.177063 | orchestrator | 2026-03-29 02:19:32.177074 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-29 02:19:32.177085 | orchestrator 
| Sunday 29 March 2026 02:19:12 +0000 (0:00:00.129) 0:03:59.805 ********** 2026-03-29 02:19:32.177095 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-29 02:19:32.177108 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-29 02:19:32.177121 | orchestrator | 2026-03-29 02:19:32.177138 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-29 02:19:32.177156 | orchestrator | Sunday 29 March 2026 02:19:14 +0000 (0:00:01.852) 0:04:01.658 ********** 2026-03-29 02:19:32.177173 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:32.177191 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:19:32.177210 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:19:32.177228 | orchestrator | 2026-03-29 02:19:32.177246 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-29 02:19:32.177264 | orchestrator | Sunday 29 March 2026 02:19:15 +0000 (0:00:00.297) 0:04:01.955 ********** 2026-03-29 02:19:32.177284 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:19:32.177305 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:19:32.177324 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:19:32.177345 | orchestrator | 2026-03-29 02:19:32.177359 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-29 02:19:32.177372 | orchestrator | 2026-03-29 02:19:32.177384 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-29 02:19:32.177397 | orchestrator | Sunday 29 March 2026 02:19:15 +0000 (0:00:00.842) 0:04:02.798 ********** 2026-03-29 02:19:32.177409 | orchestrator | ok: [testbed-manager] 2026-03-29 02:19:32.177421 | orchestrator | 2026-03-29 02:19:32.177434 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-29 02:19:32.177447 | orchestrator | Sunday 29 March 2026 02:19:16 +0000 (0:00:00.311) 0:04:03.110 ********** 2026-03-29 02:19:32.177460 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 02:19:32.177472 | orchestrator | 2026-03-29 02:19:32.177484 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-29 02:19:32.177497 | orchestrator | Sunday 29 March 2026 02:19:16 +0000 (0:00:00.228) 0:04:03.338 ********** 2026-03-29 02:19:32.177509 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:32.177522 | orchestrator | 2026-03-29 02:19:32.177535 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-29 02:19:32.177547 | orchestrator | 2026-03-29 02:19:32.177560 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-29 02:19:32.177572 | orchestrator | Sunday 29 March 2026 02:19:22 +0000 (0:00:05.515) 0:04:08.854 ********** 2026-03-29 02:19:32.177584 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:19:32.177596 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:19:32.177608 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:19:32.177621 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:19:32.177632 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:19:32.177644 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:19:32.177656 | orchestrator | 2026-03-29 02:19:32.177668 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-29 02:19:32.177680 | orchestrator | Sunday 29 March 2026 02:19:22 +0000 (0:00:00.574) 0:04:09.428 ********** 2026-03-29 02:19:32.177692 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 02:19:32.177705 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-29 02:19:32.177717 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 02:19:32.177729 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 02:19:32.177783 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 02:19:32.177804 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 02:19:32.177823 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 02:19:32.177843 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 02:19:32.177865 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 02:19:32.177910 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 02:19:32.177924 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 02:19:32.177938 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 02:19:32.177951 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 02:19:32.177963 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 02:19:32.177982 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 02:19:32.178093 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 02:19:32.178120 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 02:19:32.178138 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/rook-mds=true) 2026-03-29 02:19:32.178192 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 02:19:32.178212 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 02:19:32.178231 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 02:19:32.178249 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 02:19:32.178266 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 02:19:32.178284 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 02:19:32.178302 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 02:19:32.178321 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 02:19:32.178340 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 02:19:32.178359 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 02:19:32.178378 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 02:19:32.178396 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 02:19:32.178415 | orchestrator | 2026-03-29 02:19:32.178433 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-29 02:19:32.178453 | orchestrator | Sunday 29 March 2026 02:19:31 +0000 (0:00:08.393) 0:04:17.822 ********** 2026-03-29 02:19:32.178472 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:19:32.178492 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:19:32.178512 | orchestrator | 
skipping: [testbed-node-5] 2026-03-29 02:19:32.178531 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:32.178551 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:19:32.178569 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:19:32.178587 | orchestrator | 2026-03-29 02:19:32.178606 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-29 02:19:32.178625 | orchestrator | Sunday 29 March 2026 02:19:31 +0000 (0:00:00.507) 0:04:18.329 ********** 2026-03-29 02:19:32.178642 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:19:32.178677 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:19:32.178697 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:19:32.178716 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:19:32.178735 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:19:32.178831 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:19:32.178850 | orchestrator | 2026-03-29 02:19:32.178869 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:19:32.178889 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:19:32.178911 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-29 02:19:32.178929 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 02:19:32.178950 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 02:19:32.178969 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 02:19:32.178988 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 02:19:32.179006 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 02:19:32.179026 | orchestrator | 2026-03-29 02:19:32.179044 | orchestrator | 2026-03-29 02:19:32.179064 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:19:32.179084 | orchestrator | Sunday 29 March 2026 02:19:32 +0000 (0:00:00.637) 0:04:18.967 ********** 2026-03-29 02:19:32.179122 | orchestrator | =============================================================================== 2026-03-29 02:19:32.529701 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.87s 2026-03-29 02:19:32.529828 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.25s 2026-03-29 02:19:32.529841 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.43s 2026-03-29 02:19:32.529850 | orchestrator | kubectl : Install required packages ------------------------------------ 12.38s 2026-03-29 02:19:32.529858 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.91s 2026-03-29 02:19:32.529866 | orchestrator | Manage labels ----------------------------------------------------------- 8.39s 2026-03-29 02:19:32.529873 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.57s 2026-03-29 02:19:32.529881 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.52s 2026-03-29 02:19:32.529889 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.49s 2026-03-29 02:19:32.529902 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.36s 2026-03-29 02:19:32.529917 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.12s 2026-03-29 02:19:32.529932 | orchestrator 
| k3s_server : Detect Kubernetes version for label compatibility ---------- 2.93s 2026-03-29 02:19:32.529940 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.31s 2026-03-29 02:19:32.529948 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.23s 2026-03-29 02:19:32.529956 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.02s 2026-03-29 02:19:32.529963 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.85s 2026-03-29 02:19:32.529971 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.75s 2026-03-29 02:19:32.530002 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.68s 2026-03-29 02:19:32.530010 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.56s 2026-03-29 02:19:32.530067 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.56s 2026-03-29 02:19:32.843229 | orchestrator | + osism apply copy-kubeconfig 2026-03-29 02:19:44.955073 | orchestrator | 2026-03-29 02:19:44 | INFO  | Task 8c20d6bd-42dd-4004-ac22-2a7a3c7d29ac (copy-kubeconfig) was prepared for execution. 2026-03-29 02:19:44.955165 | orchestrator | 2026-03-29 02:19:44 | INFO  | It takes a moment until task 8c20d6bd-42dd-4004-ac22-2a7a3c7d29ac (copy-kubeconfig) has been started and output is visible here. 
2026-03-29 02:19:51.808419 | orchestrator | 2026-03-29 02:19:51.808528 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-29 02:19:51.808542 | orchestrator | 2026-03-29 02:19:51.808563 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-29 02:19:51.808573 | orchestrator | Sunday 29 March 2026 02:19:49 +0000 (0:00:00.151) 0:00:00.151 ********** 2026-03-29 02:19:51.808583 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 02:19:51.808592 | orchestrator | 2026-03-29 02:19:51.808601 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-29 02:19:51.808610 | orchestrator | Sunday 29 March 2026 02:19:49 +0000 (0:00:00.732) 0:00:00.884 ********** 2026-03-29 02:19:51.808640 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:51.808667 | orchestrator | 2026-03-29 02:19:51.808683 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-29 02:19:51.808711 | orchestrator | Sunday 29 March 2026 02:19:51 +0000 (0:00:01.205) 0:00:02.089 ********** 2026-03-29 02:19:51.808733 | orchestrator | changed: [testbed-manager] 2026-03-29 02:19:51.808742 | orchestrator | 2026-03-29 02:19:51.808799 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:19:51.808810 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:19:51.808820 | orchestrator | 2026-03-29 02:19:51.808829 | orchestrator | 2026-03-29 02:19:51.808838 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:19:51.808847 | orchestrator | Sunday 29 March 2026 02:19:51 +0000 (0:00:00.467) 0:00:02.557 ********** 2026-03-29 02:19:51.808855 | orchestrator | 
=============================================================================== 2026-03-29 02:19:51.808864 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.21s 2026-03-29 02:19:51.808872 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2026-03-29 02:19:51.808881 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s 2026-03-29 02:19:52.137284 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-03-29 02:20:04.199469 | orchestrator | 2026-03-29 02:20:04 | INFO  | Task a6b382aa-8982-4cd1-98b5-2ef69d911611 (openstackclient) was prepared for execution. 2026-03-29 02:20:04.199586 | orchestrator | 2026-03-29 02:20:04 | INFO  | It takes a moment until task a6b382aa-8982-4cd1-98b5-2ef69d911611 (openstackclient) has been started and output is visible here. 2026-03-29 02:20:51.497511 | orchestrator | 2026-03-29 02:20:51.497652 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-29 02:20:51.497681 | orchestrator | 2026-03-29 02:20:51.497693 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-29 02:20:51.497705 | orchestrator | Sunday 29 March 2026 02:20:08 +0000 (0:00:00.222) 0:00:00.222 ********** 2026-03-29 02:20:51.497717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-29 02:20:51.497730 | orchestrator | 2026-03-29 02:20:51.497852 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-29 02:20:51.497875 | orchestrator | Sunday 29 March 2026 02:20:08 +0000 (0:00:00.210) 0:00:00.432 ********** 2026-03-29 02:20:51.497893 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-29 
02:20:51.497913 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-29 02:20:51.497933 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-29 02:20:51.497953 | orchestrator | 2026-03-29 02:20:51.497971 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-29 02:20:51.497986 | orchestrator | Sunday 29 March 2026 02:20:09 +0000 (0:00:01.210) 0:00:01.643 ********** 2026-03-29 02:20:51.497997 | orchestrator | changed: [testbed-manager] 2026-03-29 02:20:51.498009 | orchestrator | 2026-03-29 02:20:51.498080 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-29 02:20:51.498094 | orchestrator | Sunday 29 March 2026 02:20:11 +0000 (0:00:01.401) 0:00:03.045 ********** 2026-03-29 02:20:51.498106 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-03-29 02:20:51.498120 | orchestrator | ok: [testbed-manager] 2026-03-29 02:20:51.498132 | orchestrator | 2026-03-29 02:20:51.498145 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-29 02:20:51.498158 | orchestrator | Sunday 29 March 2026 02:20:46 +0000 (0:00:34.808) 0:00:37.853 ********** 2026-03-29 02:20:51.498171 | orchestrator | changed: [testbed-manager] 2026-03-29 02:20:51.498183 | orchestrator | 2026-03-29 02:20:51.498196 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-29 02:20:51.498208 | orchestrator | Sunday 29 March 2026 02:20:46 +0000 (0:00:00.952) 0:00:38.805 ********** 2026-03-29 02:20:51.498221 | orchestrator | ok: [testbed-manager] 2026-03-29 02:20:51.498233 | orchestrator | 2026-03-29 02:20:51.498246 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-29 02:20:51.498259 | orchestrator | Sunday 29 March 2026 02:20:47 +0000 
(0:00:00.645) 0:00:39.451 ********** 2026-03-29 02:20:51.498272 | orchestrator | changed: [testbed-manager] 2026-03-29 02:20:51.498285 | orchestrator | 2026-03-29 02:20:51.498297 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-29 02:20:51.498308 | orchestrator | Sunday 29 March 2026 02:20:49 +0000 (0:00:01.585) 0:00:41.036 ********** 2026-03-29 02:20:51.498319 | orchestrator | changed: [testbed-manager] 2026-03-29 02:20:51.498330 | orchestrator | 2026-03-29 02:20:51.498341 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-29 02:20:51.498352 | orchestrator | Sunday 29 March 2026 02:20:49 +0000 (0:00:00.757) 0:00:41.793 ********** 2026-03-29 02:20:51.498362 | orchestrator | changed: [testbed-manager] 2026-03-29 02:20:51.498373 | orchestrator | 2026-03-29 02:20:51.498384 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-29 02:20:51.498395 | orchestrator | Sunday 29 March 2026 02:20:50 +0000 (0:00:00.588) 0:00:42.382 ********** 2026-03-29 02:20:51.498406 | orchestrator | ok: [testbed-manager] 2026-03-29 02:20:51.498416 | orchestrator | 2026-03-29 02:20:51.498427 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:20:51.498438 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:20:51.498450 | orchestrator | 2026-03-29 02:20:51.498461 | orchestrator | 2026-03-29 02:20:51.498472 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:20:51.498483 | orchestrator | Sunday 29 March 2026 02:20:51 +0000 (0:00:00.460) 0:00:42.843 ********** 2026-03-29 02:20:51.498494 | orchestrator | =============================================================================== 2026-03-29 02:20:51.498504 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 34.81s 2026-03-29 02:20:51.498515 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.59s 2026-03-29 02:20:51.498538 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.40s 2026-03-29 02:20:51.498549 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.21s 2026-03-29 02:20:51.498560 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.95s 2026-03-29 02:20:51.498571 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.76s 2026-03-29 02:20:51.498581 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.65s 2026-03-29 02:20:51.498592 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.59s 2026-03-29 02:20:51.498603 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.46s 2026-03-29 02:20:51.498614 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.21s 2026-03-29 02:20:53.948469 | orchestrator | 2026-03-29 02:20:53 | INFO  | Task 95ddcded-1c9e-49bc-83ab-97958dae3778 (common) was prepared for execution. 2026-03-29 02:20:53.948569 | orchestrator | 2026-03-29 02:20:53 | INFO  | It takes a moment until task 95ddcded-1c9e-49bc-83ab-97958dae3778 (common) has been started and output is visible here. 
2026-03-29 02:21:06.545393 | orchestrator | 2026-03-29 02:21:06.545490 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-29 02:21:06.545501 | orchestrator | 2026-03-29 02:21:06.545508 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-29 02:21:06.545515 | orchestrator | Sunday 29 March 2026 02:20:58 +0000 (0:00:00.284) 0:00:00.284 ********** 2026-03-29 02:21:06.545523 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:21:06.545530 | orchestrator | 2026-03-29 02:21:06.545537 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-29 02:21:06.545543 | orchestrator | Sunday 29 March 2026 02:20:59 +0000 (0:00:01.374) 0:00:01.659 ********** 2026-03-29 02:21:06.545551 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 02:21:06.545557 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 02:21:06.545565 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 02:21:06.545572 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 02:21:06.545578 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 02:21:06.545585 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 02:21:06.545592 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 02:21:06.545599 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 02:21:06.545606 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-03-29 02:21:06.545628 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 02:21:06.545636 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 02:21:06.545645 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 02:21:06.545651 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 02:21:06.545658 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 02:21:06.545664 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 02:21:06.545670 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 02:21:06.545677 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 02:21:06.545702 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 02:21:06.545709 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 02:21:06.545716 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 02:21:06.545723 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 02:21:06.545729 | orchestrator | 2026-03-29 02:21:06.545736 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-29 02:21:06.545743 | orchestrator | Sunday 29 March 2026 02:21:02 +0000 (0:00:02.801) 0:00:04.461 ********** 2026-03-29 02:21:06.545750 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:21:06.545757 | orchestrator | 2026-03-29 02:21:06.545763 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-29 02:21:06.545824 | orchestrator | Sunday 29 March 2026 02:21:03 +0000 (0:00:01.390) 0:00:05.852 ********** 2026-03-29 02:21:06.545834 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:06.545843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:06.545868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:06.545877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:06.545884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:06.545892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:06.545905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:06.545913 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:06.545920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:06.545932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768462 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 
02:21:07.768479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:07.768519 | orchestrator | 2026-03-29 02:21:07.768524 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-29 02:21:07.768529 | orchestrator | Sunday 29 March 2026 02:21:07 +0000 (0:00:03.576) 0:00:09.428 ********** 2026-03-29 02:21:07.768535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:07.768539 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:07.768543 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:07.768547 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:21:07.768553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:07.768563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346370 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:21:08.346409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:08.346416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346425 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:21:08.346429 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:08.346435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346447 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:21:08.346466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:08.346478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346490 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:21:08.346496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:08.346502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:08.346515 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:21:08.346522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:08.346533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:09.183254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:09.183339 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:21:09.183352 | orchestrator | 2026-03-29 02:21:09.183361 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-29 02:21:09.183370 | orchestrator | Sunday 29 March 2026 02:21:08 +0000 (0:00:00.919) 0:00:10.348 ********** 2026-03-29 02:21:09.183379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:09.183389 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:09.183398 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:09.183405 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:21:09.183428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:09.183440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:09.183467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:09.183475 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:21:09.183504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:09.183513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:09.183520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:21:09.183528 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:21:09.183535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 02:21:09.183543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-29 02:21:09.183554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:09.183578 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:21:09.183586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:09.183608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:14.178453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:14.178563 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:21:14.178581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:14.178595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:14.178639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:14.178651 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:21:14.178662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:14.178696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:14.178708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:14.178718 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:21:14.178727 | orchestrator |
2026-03-29 02:21:14.178739 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-29 02:21:14.178751 | orchestrator | Sunday 29 March 2026 02:21:10 +0000 (0:00:01.740) 0:00:12.089 **********
2026-03-29 02:21:14.178760 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:21:14.178792 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:21:14.178803 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:21:14.178812 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:21:14.178836 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:21:14.178847 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:21:14.178856 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:21:14.178866 | orchestrator |
2026-03-29 02:21:14.178876 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-29 02:21:14.178887 | orchestrator | Sunday 29 March 2026 02:21:10 +0000 (0:00:00.713) 0:00:12.802 **********
2026-03-29 02:21:14.178897 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:21:14.178907 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:21:14.178916 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:21:14.178926 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:21:14.178936 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:21:14.178945 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:21:14.178954 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:21:14.178964 | orchestrator |
2026-03-29 02:21:14.178973 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-29 02:21:14.178983 | orchestrator | Sunday 29 March 2026 02:21:11 +0000 (0:00:00.851) 0:00:13.653 **********
2026-03-29 02:21:14.178996 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:14.179021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:14.179040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:14.179054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:14.179064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:14.179075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:14.179101 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.111954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:17.112092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112231 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:17.112557 | orchestrator |
2026-03-29 02:21:17.112588 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-29 02:21:17.112610 | orchestrator | Sunday 29 March 2026 02:21:15 +0000 (0:00:03.524) 0:00:17.178 **********
2026-03-29 02:21:17.112632 | orchestrator | [WARNING]: Skipped
2026-03-29 02:21:17.112654 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-29 02:21:17.112678 | orchestrator | to this access issue:
2026-03-29 02:21:17.112698 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-29 02:21:17.112718 | orchestrator | directory
2026-03-29 02:21:17.112738 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 02:21:17.112898 | orchestrator |
2026-03-29 02:21:17.112926 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-29 02:21:17.112945 | orchestrator | Sunday 29 March 2026 02:21:16 +0000 (0:00:00.922) 0:00:18.100 **********
2026-03-29 02:21:17.112963 | orchestrator | [WARNING]: Skipped
2026-03-29 02:21:17.112982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-29 02:21:17.113001 | orchestrator | to this access issue:
2026-03-29 02:21:17.113038 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-29 02:21:27.350966 | orchestrator | directory
2026-03-29 02:21:27.351075 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 02:21:27.351092 | orchestrator |
2026-03-29 02:21:27.351105 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-29 02:21:27.351118 | orchestrator | Sunday 29 March 2026 02:21:17 +0000 (0:00:01.334) 0:00:19.435 **********
2026-03-29 02:21:27.351165 | orchestrator | [WARNING]: Skipped
2026-03-29 02:21:27.351179 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-29 02:21:27.351193 | orchestrator | to this access issue:
2026-03-29 02:21:27.351205 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-29 02:21:27.351217 | orchestrator | directory
2026-03-29 02:21:27.351228 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 02:21:27.351239 | orchestrator |
2026-03-29 02:21:27.351251 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-29 02:21:27.351262 | orchestrator | Sunday 29 March 2026 02:21:18 +0000 (0:00:00.841) 0:00:20.276 **********
2026-03-29 02:21:27.351274 | orchestrator | [WARNING]: Skipped
2026-03-29 02:21:27.351286 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-29 02:21:27.351298 | orchestrator | to this access issue:
2026-03-29 02:21:27.351310 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-29 02:21:27.351322 | orchestrator | directory
2026-03-29 02:21:27.351335 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 02:21:27.351346 | orchestrator |
2026-03-29 02:21:27.351357 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-29 02:21:27.351369 | orchestrator | Sunday 29 March 2026 02:21:19 +0000 (0:00:00.861) 0:00:21.138 **********
2026-03-29 02:21:27.351381 | orchestrator | changed: [testbed-manager]
2026-03-29 02:21:27.351392 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:21:27.351404 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:21:27.351415 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:21:27.351426 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:21:27.351438 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:21:27.351465 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:21:27.351477 | orchestrator |
2026-03-29 02:21:27.351488 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-29 02:21:27.351500 | orchestrator | Sunday 29 March 2026 02:21:21 +0000 (0:00:02.721) 0:00:23.859 **********
2026-03-29 02:21:27.351514 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 02:21:27.351529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 02:21:27.351543 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 02:21:27.351556 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 02:21:27.351572 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 02:21:27.351586 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 02:21:27.351606 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 02:21:27.351619 | orchestrator |
2026-03-29 02:21:27.351636 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-29 02:21:27.351651 | orchestrator | Sunday 29 March 2026 02:21:24 +0000 (0:00:02.170) 0:00:26.030 **********
2026-03-29 02:21:27.351666 | orchestrator | changed: [testbed-manager]
2026-03-29 02:21:27.351682 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:21:27.351695 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:21:27.351708 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:21:27.351721 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:21:27.351733 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:21:27.351744 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:21:27.351756 | orchestrator |
2026-03-29 02:21:27.351767 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-29 02:21:27.351789 | orchestrator | Sunday 29 March 2026 02:21:25 +0000 (0:00:01.954) 0:00:27.985 **********
2026-03-29 02:21:27.351826 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:27.351865 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:27.351878 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:27.351890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:27.351902 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:27.351919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:27.351933 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:27.351956 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:27.351968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:27.351989 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:33.468333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468443 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468461 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:33.468492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468526 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468539 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 02:21:33.468554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468610 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468632 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468651 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468671 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:21:33.468692 | orchestrator |
2026-03-29 02:21:33.468716 | orchestrator | TASK 
[common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-29 02:21:33.468739 | orchestrator | Sunday 29 March 2026 02:21:27 +0000 (0:00:01.615) 0:00:29.600 ********** 2026-03-29 02:21:33.468759 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 02:21:33.468779 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 02:21:33.468812 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 02:21:33.468865 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 02:21:33.468886 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 02:21:33.468904 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 02:21:33.468924 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 02:21:33.468945 | orchestrator | 2026-03-29 02:21:33.468963 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-29 02:21:33.468982 | orchestrator | Sunday 29 March 2026 02:21:29 +0000 (0:00:02.051) 0:00:31.651 ********** 2026-03-29 02:21:33.468994 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 02:21:33.469005 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 02:21:33.469016 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 02:21:33.469035 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 02:21:33.469046 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 02:21:33.469057 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 02:21:33.469068 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 02:21:33.469078 | orchestrator | 2026-03-29 02:21:33.469089 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-29 02:21:33.469100 | orchestrator | Sunday 29 March 2026 02:21:31 +0000 (0:00:01.702) 0:00:33.354 ********** 2026-03-29 02:21:33.469111 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:33.469137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:34.116909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:34.117001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:34.117034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:34.117055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-29 02:21:34.117066 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 02:21:34.117085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117109 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117157 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117168 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:21:34.117195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:22:47.557220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:22:47.557365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:22:47.557394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:22:47.557429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:22:47.557446 | orchestrator | 2026-03-29 02:22:47.557465 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-29 02:22:47.557483 | orchestrator | Sunday 29 March 2026 02:21:34 +0000 (0:00:02.765) 0:00:36.119 ********** 2026-03-29 02:22:47.557499 | orchestrator | changed: [testbed-manager] 2026-03-29 02:22:47.557511 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:22:47.557520 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:22:47.557530 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:22:47.557540 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:22:47.557549 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:22:47.557559 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:22:47.557568 | orchestrator | 2026-03-29 02:22:47.557583 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-29 02:22:47.557599 | orchestrator | Sunday 29 March 2026 02:21:35 +0000 (0:00:01.405) 0:00:37.525 ********** 2026-03-29 02:22:47.557615 | orchestrator | changed: [testbed-manager] 2026-03-29 02:22:47.557630 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:22:47.557646 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:22:47.557662 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:22:47.557674 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:22:47.557683 | orchestrator | changed: 
[testbed-node-4] 2026-03-29 02:22:47.557693 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:22:47.557702 | orchestrator | 2026-03-29 02:22:47.557712 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 02:22:47.557721 | orchestrator | Sunday 29 March 2026 02:21:36 +0000 (0:00:01.075) 0:00:38.600 ********** 2026-03-29 02:22:47.557731 | orchestrator | 2026-03-29 02:22:47.557740 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 02:22:47.557750 | orchestrator | Sunday 29 March 2026 02:21:36 +0000 (0:00:00.065) 0:00:38.666 ********** 2026-03-29 02:22:47.557761 | orchestrator | 2026-03-29 02:22:47.557772 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 02:22:47.557784 | orchestrator | Sunday 29 March 2026 02:21:36 +0000 (0:00:00.064) 0:00:38.730 ********** 2026-03-29 02:22:47.557795 | orchestrator | 2026-03-29 02:22:47.557806 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 02:22:47.557817 | orchestrator | Sunday 29 March 2026 02:21:36 +0000 (0:00:00.065) 0:00:38.796 ********** 2026-03-29 02:22:47.557828 | orchestrator | 2026-03-29 02:22:47.557839 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 02:22:47.557860 | orchestrator | Sunday 29 March 2026 02:21:37 +0000 (0:00:00.274) 0:00:39.070 ********** 2026-03-29 02:22:47.557872 | orchestrator | 2026-03-29 02:22:47.557882 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 02:22:47.557895 | orchestrator | Sunday 29 March 2026 02:21:37 +0000 (0:00:00.061) 0:00:39.131 ********** 2026-03-29 02:22:47.557913 | orchestrator | 2026-03-29 02:22:47.557929 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 02:22:47.557946 
| orchestrator | Sunday 29 March 2026 02:21:37 +0000 (0:00:00.065) 0:00:39.196 ********** 2026-03-29 02:22:47.557962 | orchestrator | 2026-03-29 02:22:47.557977 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-29 02:22:47.557992 | orchestrator | Sunday 29 March 2026 02:21:37 +0000 (0:00:00.089) 0:00:39.286 ********** 2026-03-29 02:22:47.558009 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:22:47.558166 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:22:47.558181 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:22:47.558191 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:22:47.558200 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:22:47.558229 | orchestrator | changed: [testbed-manager] 2026-03-29 02:22:47.558240 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:22:47.558249 | orchestrator | 2026-03-29 02:22:47.558259 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-29 02:22:47.558268 | orchestrator | Sunday 29 March 2026 02:22:11 +0000 (0:00:33.977) 0:01:13.263 ********** 2026-03-29 02:22:47.558278 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:22:47.558287 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:22:47.558297 | orchestrator | changed: [testbed-manager] 2026-03-29 02:22:47.558306 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:22:47.558316 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:22:47.558325 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:22:47.558334 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:22:47.558344 | orchestrator | 2026-03-29 02:22:47.558353 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-29 02:22:47.558363 | orchestrator | Sunday 29 March 2026 02:22:37 +0000 (0:00:26.650) 0:01:39.914 ********** 2026-03-29 02:22:47.558372 | orchestrator | ok: [testbed-manager] 
2026-03-29 02:22:47.558382 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:22:47.558392 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:22:47.558401 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:22:47.558411 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:22:47.558420 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:22:47.558429 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:22:47.558439 | orchestrator | 2026-03-29 02:22:47.558448 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-29 02:22:47.558458 | orchestrator | Sunday 29 March 2026 02:22:40 +0000 (0:00:02.136) 0:01:42.050 ********** 2026-03-29 02:22:47.558467 | orchestrator | changed: [testbed-manager] 2026-03-29 02:22:47.558477 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:22:47.558486 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:22:47.558496 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:22:47.558505 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:22:47.558514 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:22:47.558524 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:22:47.558533 | orchestrator | 2026-03-29 02:22:47.558543 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:22:47.558554 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 02:22:47.558613 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 02:22:47.558638 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 02:22:47.558658 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 02:22:47.558668 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 
ignored=0 2026-03-29 02:22:47.558678 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 02:22:47.558688 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 02:22:47.558697 | orchestrator | 2026-03-29 02:22:47.558707 | orchestrator | 2026-03-29 02:22:47.558717 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:22:47.558727 | orchestrator | Sunday 29 March 2026 02:22:47 +0000 (0:00:07.489) 0:01:49.539 ********** 2026-03-29 02:22:47.558736 | orchestrator | =============================================================================== 2026-03-29 02:22:47.558746 | orchestrator | common : Restart fluentd container ------------------------------------- 33.98s 2026-03-29 02:22:47.558756 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 26.65s 2026-03-29 02:22:47.558765 | orchestrator | common : Restart cron container ----------------------------------------- 7.49s 2026-03-29 02:22:47.558775 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.58s 2026-03-29 02:22:47.558784 | orchestrator | common : Copying over config.json files for services -------------------- 3.52s 2026-03-29 02:22:47.558794 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.80s 2026-03-29 02:22:47.558803 | orchestrator | common : Check common containers ---------------------------------------- 2.77s 2026-03-29 02:22:47.558813 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.72s 2026-03-29 02:22:47.558822 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.17s 2026-03-29 02:22:47.558832 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.14s 2026-03-29 02:22:47.558841 
| orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.05s 2026-03-29 02:22:47.558851 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.95s 2026-03-29 02:22:47.558860 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.74s 2026-03-29 02:22:47.558870 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.70s 2026-03-29 02:22:47.558879 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.62s 2026-03-29 02:22:47.558889 | orchestrator | common : Creating log volume -------------------------------------------- 1.41s 2026-03-29 02:22:47.558907 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s 2026-03-29 02:22:47.945927 | orchestrator | common : include_tasks -------------------------------------------------- 1.37s 2026-03-29 02:22:47.946141 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.33s 2026-03-29 02:22:47.946157 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.08s 2026-03-29 02:22:50.234888 | orchestrator | 2026-03-29 02:22:50 | INFO  | Task 8e9ced18-38d1-4e8e-b304-3bacc0b9d564 (loadbalancer) was prepared for execution. 2026-03-29 02:22:50.234960 | orchestrator | 2026-03-29 02:22:50 | INFO  | It takes a moment until task 8e9ced18-38d1-4e8e-b304-3bacc0b9d564 (loadbalancer) has been started and output is visible here. 
2026-03-29 02:23:04.021665 | orchestrator |
2026-03-29 02:23:04.021804 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 02:23:04.021826 | orchestrator |
2026-03-29 02:23:04.021842 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 02:23:04.021858 | orchestrator | Sunday 29 March 2026 02:22:54 +0000 (0:00:00.245) 0:00:00.245 **********
2026-03-29 02:23:04.021900 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:23:04.021917 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:23:04.021932 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:23:04.021946 | orchestrator |
2026-03-29 02:23:04.021960 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 02:23:04.021975 | orchestrator | Sunday 29 March 2026 02:22:54 +0000 (0:00:00.308) 0:00:00.553 **********
2026-03-29 02:23:04.021990 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-29 02:23:04.022002 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-29 02:23:04.022072 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-29 02:23:04.022152 | orchestrator |
2026-03-29 02:23:04.022201 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-29 02:23:04.022221 | orchestrator |
2026-03-29 02:23:04.022239 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-29 02:23:04.022272 | orchestrator | Sunday 29 March 2026 02:22:55 +0000 (0:00:00.406) 0:00:00.960 **********
2026-03-29 02:23:04.022287 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:23:04.022301 | orchestrator |
2026-03-29 02:23:04.022315 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-29 02:23:04.022329 | orchestrator | Sunday 29 March 2026 02:22:55 +0000 (0:00:00.523) 0:00:01.483 **********
2026-03-29 02:23:04.022343 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:23:04.022357 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:23:04.022370 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:23:04.022384 | orchestrator |
2026-03-29 02:23:04.022397 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-29 02:23:04.022411 | orchestrator | Sunday 29 March 2026 02:22:56 +0000 (0:00:00.594) 0:00:02.077 **********
2026-03-29 02:23:04.022425 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:23:04.022443 | orchestrator |
2026-03-29 02:23:04.022457 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-29 02:23:04.022476 | orchestrator | Sunday 29 March 2026 02:22:56 +0000 (0:00:00.691) 0:00:02.768 **********
2026-03-29 02:23:04.022497 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:23:04.022514 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:23:04.022531 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:23:04.022550 | orchestrator |
2026-03-29 02:23:04.022565 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-29 02:23:04.022579 | orchestrator | Sunday 29 March 2026 02:22:57 +0000 (0:00:00.616) 0:00:03.385 **********
2026-03-29 02:23:04.022593 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 02:23:04.022606 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 02:23:04.022620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 02:23:04.022634 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 02:23:04.022648 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 02:23:04.022661 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 02:23:04.022675 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 02:23:04.022690 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 02:23:04.022705 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 02:23:04.022719 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 02:23:04.022746 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 02:23:04.022760 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 02:23:04.022774 | orchestrator |
2026-03-29 02:23:04.022788 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-29 02:23:04.022802 | orchestrator | Sunday 29 March 2026 02:22:59 +0000 (0:00:02.160) 0:00:05.546 **********
2026-03-29 02:23:04.022816 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-29 02:23:04.022829 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-29 02:23:04.022843 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-29 02:23:04.022858 | orchestrator |
2026-03-29 02:23:04.022872 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-29 02:23:04.022886 | orchestrator | Sunday 29 March 2026 02:23:00 +0000 (0:00:00.711) 0:00:06.257 **********
2026-03-29 02:23:04.022900 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-29 02:23:04.022913 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-29 02:23:04.022947 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-29 02:23:04.022961 | orchestrator |
2026-03-29 02:23:04.022975 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-29 02:23:04.022989 | orchestrator | Sunday 29 March 2026 02:23:01 +0000 (0:00:01.316) 0:00:07.574 **********
2026-03-29 02:23:04.023003 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-29 02:23:04.023017 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:23:04.023054 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-29 02:23:04.023069 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:23:04.023110 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-29 02:23:04.023123 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:23:04.023135 | orchestrator |
2026-03-29 02:23:04.023148 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-29 02:23:04.023162 | orchestrator | Sunday 29 March 2026 02:23:02 +0000 (0:00:00.498) 0:00:08.073 **********
2026-03-29 02:23:04.023189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 02:23:04.023210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 02:23:04.023225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 02:23:04.023250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:04.023266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:04.023307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:09.195177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:09.195256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:09.195263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:09.195267 | orchestrator |
2026-03-29 02:23:09.195273 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-29 02:23:09.195286 | orchestrator | Sunday 29 March 2026 02:23:03 +0000 (0:00:01.862) 0:00:09.935 **********
2026-03-29 02:23:09.195290 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:23:09.195309 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:23:09.195313 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:23:09.195318 | orchestrator |
2026-03-29 02:23:09.195322 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-29 02:23:09.195326 | orchestrator | Sunday 29 March 2026 02:23:04 +0000 (0:00:00.901) 0:00:10.837 **********
2026-03-29 02:23:09.195330 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-29 02:23:09.195334 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-29 02:23:09.195338 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-29 02:23:09.195342 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-29 02:23:09.195345 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-29 02:23:09.195349 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-29 02:23:09.195353 | orchestrator |
2026-03-29 02:23:09.195356 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-29 02:23:09.195360 | orchestrator | Sunday 29 March 2026 02:23:06 +0000 (0:00:01.469) 0:00:12.307 **********
2026-03-29 02:23:09.195364 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:23:09.195368 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:23:09.195372 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:23:09.195375 | orchestrator |
2026-03-29 02:23:09.195379 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-29 02:23:09.195383 | orchestrator | Sunday 29 March 2026 02:23:07 +0000 (0:00:00.920) 0:00:13.227 **********
2026-03-29 02:23:09.195387 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:23:09.195391 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:23:09.195394 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:23:09.195398 | orchestrator |
2026-03-29 02:23:09.195402 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-29 02:23:09.195406 | orchestrator | Sunday 29 March 2026 02:23:08 +0000 (0:00:01.304) 0:00:14.532 **********
2026-03-29 02:23:09.195411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 02:23:09.195426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:09.195431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:09.195435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-29 02:23:09.195443 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:23:09.195448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 02:23:09.195475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:09.195480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:09.195484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-29 02:23:09.195488 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:23:09.195495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 02:23:11.939590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:11.939719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:11.939732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-29 02:23:11.939742 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:23:11.939753 | orchestrator |
2026-03-29 02:23:11.939763 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-03-29 02:23:11.939773 | orchestrator | Sunday 29 March 2026 02:23:09 +0000 (0:00:00.589) 0:00:15.121 **********
2026-03-29 02:23:11.939781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 02:23:11.939791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 02:23:11.939800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 02:23:11.939852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:11.939863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:11.939871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-29 02:23:11.939880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:11.939889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:11.939897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:11.939918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-29 02:23:20.250755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:20.250952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb', '__omit_place_holder__8bca57817957870b3dc521db1912486c6648ceeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-29 02:23:20.250977 | orchestrator |
2026-03-29 02:23:20.250993 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-03-29 02:23:20.251009 | orchestrator | Sunday 29 March 2026 02:23:11 +0000 (0:00:02.742) 0:00:17.863 **********
2026-03-29 02:23:20.251025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 02:23:20.251042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 02:23:20.251058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 02:23:20.251101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:20.251190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:20.251205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 02:23:20.251215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:20.251224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:20.251233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 02:23:20.251241 | orchestrator |
2026-03-29 02:23:20.251250 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-03-29 02:23:20.251259 | orchestrator | Sunday 29 March 2026 02:23:15 +0000 (0:00:03.238) 0:00:21.102 **********
2026-03-29 02:23:20.251277 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-29 02:23:20.251287 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-29 02:23:20.251297 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-29 02:23:20.251307 | orchestrator |
2026-03-29 02:23:20.251317 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-29 02:23:20.251326 | orchestrator | Sunday 29 March 2026 02:23:16 +0000 (0:00:01.777) 0:00:22.880 **********
2026-03-29 02:23:20.251336 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-29 02:23:20.251347 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-29 02:23:20.251357 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-29 02:23:20.251366 | orchestrator |
2026-03-29 02:23:20.251376 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-29 02:23:20.251386 | orchestrator | Sunday 29 March 2026 02:23:19 +0000 (0:00:02.775) 0:00:25.655 **********
2026-03-29 02:23:20.251396 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:23:20.251409 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:23:20.251419 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:23:20.251430 | orchestrator |
2026-03-29 02:23:20.251447 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-29 02:23:31.477633 | orchestrator | Sunday 29 March 2026 02:23:20 +0000 (0:00:00.522) 0:00:26.178 **********
2026-03-29 02:23:31.477745 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-29 02:23:31.477775 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-29 02:23:31.477788 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-29 02:23:31.477800 | orchestrator |
2026-03-29 02:23:31.477812 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-29 02:23:31.477824 | orchestrator | Sunday 29 March 2026 02:23:22 +0000 (0:00:02.021) 0:00:28.199 **********
2026-03-29 02:23:31.477835 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-29 02:23:31.477847 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-29 02:23:31.477858 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-29 02:23:31.477869 | orchestrator |
2026-03-29 02:23:31.477880 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-29 02:23:31.477891 | orchestrator | Sunday 29 March 2026
02:23:24 +0000 (0:00:02.027) 0:00:30.227 ********** 2026-03-29 02:23:31.477902 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-29 02:23:31.477914 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-29 02:23:31.477925 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-29 02:23:31.477935 | orchestrator | 2026-03-29 02:23:31.477959 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-29 02:23:31.477971 | orchestrator | Sunday 29 March 2026 02:23:25 +0000 (0:00:01.382) 0:00:31.610 ********** 2026-03-29 02:23:31.477982 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-29 02:23:31.477993 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-29 02:23:31.478004 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-29 02:23:31.478076 | orchestrator | 2026-03-29 02:23:31.478113 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-29 02:23:31.478125 | orchestrator | Sunday 29 March 2026 02:23:27 +0000 (0:00:01.425) 0:00:33.035 ********** 2026-03-29 02:23:31.478136 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:23:31.478176 | orchestrator | 2026-03-29 02:23:31.478192 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-29 02:23:31.478204 | orchestrator | Sunday 29 March 2026 02:23:27 +0000 (0:00:00.510) 0:00:33.546 ********** 2026-03-29 02:23:31.478220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 02:23:31.478237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 02:23:31.478260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 02:23:31.478307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 02:23:31.478328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 02:23:31.478347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 02:23:31.478378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 02:23:31.478399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 02:23:31.478418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 02:23:31.478438 | orchestrator | 2026-03-29 02:23:31.478458 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-29 02:23:31.478477 | orchestrator | Sunday 29 March 2026 02:23:30 +0000 (0:00:03.293) 0:00:36.840 ********** 2026-03-29 02:23:31.478513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.223087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.223217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.223248 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:32.223257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.223264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.223270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.223276 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:32.223282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.223314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.223321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.223332 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:32.223338 | orchestrator | 2026-03-29 02:23:32.223345 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-29 
02:23:32.223352 | orchestrator | Sunday 29 March 2026 02:23:31 +0000 (0:00:00.564) 0:00:37.405 ********** 2026-03-29 02:23:32.223359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.223365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.223371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.223377 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:32.223383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.223397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.993763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.993871 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:32.993884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.993894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.993901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.993909 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:32.993916 | orchestrator | 2026-03-29 02:23:32.993924 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-29 02:23:32.993933 | orchestrator | Sunday 29 March 2026 02:23:32 +0000 (0:00:00.741) 0:00:38.146 ********** 2026-03-29 02:23:32.993940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.993948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.993979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.993992 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:32.994000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.994007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.994078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.994087 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:32.994095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 02:23:32.994113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:32.994126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:32.994145 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:34.286714 | orchestrator | 2026-03-29 02:23:34.286797 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-29 02:23:34.286807 | orchestrator | Sunday 29 March 2026 02:23:32 +0000 (0:00:00.767) 0:00:38.914 ********** 2026-03-29 02:23:34.286817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 02:23:34.286828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:34.286836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:34.286843 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:34.286852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 02:23:34.286859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:34.286878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:34.286902 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:34.286921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 02:23:34.286928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:34.286935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:34.286941 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:34.286948 | orchestrator | 2026-03-29 02:23:34.286954 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-29 02:23:34.286961 | orchestrator | Sunday 29 March 2026 02:23:33 +0000 (0:00:00.544) 0:00:39.458 ********** 2026-03-29 02:23:34.286967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 02:23:34.286974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:34.286993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:34.287000 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:34.287021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 02:23:35.225107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:35.225324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:35.225354 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:35.225377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 02:23:35.225396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:35.225412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:35.225446 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:35.225460 | orchestrator | 2026-03-29 02:23:35.225478 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-29 02:23:35.225497 | orchestrator | Sunday 29 March 2026 02:23:34 +0000 (0:00:00.754) 0:00:40.213 ********** 2026-03-29 02:23:35.225529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-03-29 02:23:35.225619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:35.225636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:35.225648 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:35.225660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-29 02:23:35.225678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:35.225709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:35.225726 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:35.225752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-29 02:23:35.225781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:36.557697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:36.557823 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:36.557842 | orchestrator | 2026-03-29 02:23:36.557855 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-29 02:23:36.557869 | orchestrator | Sunday 29 March 2026 02:23:35 +0000 (0:00:00.932) 0:00:41.145 ********** 2026-03-29 02:23:36.557882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 02:23:36.557895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:36.557941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:36.557955 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:36.557967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 02:23:36.557999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:36.558081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:36.558100 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:36.558119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 02:23:36.558138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:36.558199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:36.558220 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:36.558239 | orchestrator | 2026-03-29 02:23:36.558260 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-29 02:23:36.558281 | orchestrator | Sunday 29 March 2026 02:23:35 +0000 (0:00:00.559) 0:00:41.705 ********** 2026-03-29 02:23:36.558301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 02:23:36.558323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:36.558371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:43.042006 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:43.042146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 02:23:43.042161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:43.042240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:43.042251 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:43.042260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 02:23:43.042282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 02:23:43.042291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 02:23:43.042299 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:43.042307 | orchestrator | 2026-03-29 02:23:43.042317 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-29 02:23:43.042327 | orchestrator | Sunday 29 March 2026 02:23:36 +0000 (0:00:00.774) 0:00:42.480 ********** 2026-03-29 02:23:43.042336 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 02:23:43.042360 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 02:23:43.042369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 02:23:43.042377 | orchestrator | 2026-03-29 02:23:43.042385 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-29 02:23:43.042394 | orchestrator | Sunday 29 March 2026 02:23:38 +0000 (0:00:01.636) 0:00:44.116 ********** 2026-03-29 02:23:43.042403 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 02:23:43.042412 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 02:23:43.042420 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 02:23:43.042428 | orchestrator | 2026-03-29 02:23:43.042442 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-29 02:23:43.042450 | orchestrator | Sunday 29 March 2026 02:23:39 +0000 (0:00:01.662) 0:00:45.779 ********** 2026-03-29 02:23:43.042458 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 02:23:43.042466 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 02:23:43.042474 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 02:23:43.042482 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 02:23:43.042490 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:43.042499 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 02:23:43.042507 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:43.042515 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 02:23:43.042523 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:43.042531 | orchestrator | 2026-03-29 02:23:43.042540 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-29 02:23:43.042548 | orchestrator | Sunday 29 March 2026 02:23:40 +0000 (0:00:00.786) 0:00:46.566 ********** 2026-03-29 02:23:43.042557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 02:23:43.042566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 02:23:43.042580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 02:23:43.042596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 02:23:47.027275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 02:23:47.027400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 02:23:47.027427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 02:23:47.027446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 02:23:47.027466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 02:23:47.027487 | orchestrator | 2026-03-29 02:23:47.027529 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-29 02:23:47.027552 | orchestrator | Sunday 29 March 2026 02:23:43 +0000 (0:00:02.400) 0:00:48.966 ********** 2026-03-29 02:23:47.027567 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:23:47.027578 | orchestrator | 2026-03-29 02:23:47.027589 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-29 02:23:47.027599 | orchestrator | Sunday 29 March 2026 02:23:43 +0000 (0:00:00.761) 0:00:49.727 ********** 2026-03-29 02:23:47.027632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 02:23:47.027672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 02:23:47.027686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.027697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.027709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 02:23:47.027726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 02:23:47.027737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.027766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.670687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 02:23:47.670780 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 02:23:47.670795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.670820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.670831 | orchestrator | 2026-03-29 02:23:47.670842 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-03-29 02:23:47.670853 | orchestrator | Sunday 29 March 2026 02:23:47 +0000 (0:00:03.224) 0:00:52.952 ********** 2026-03-29 02:23:47.670863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-29 02:23:47.670911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 02:23:47.670923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.670932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.670941 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:47.670951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-29 02:23:47.670965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 02:23:47.670981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.670990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 02:23:47.670999 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:47.671015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-29 02:23:55.806379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 02:23:55.806496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-03-29 02:23:55.806512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 02:23:55.806553 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:55.806568 | orchestrator | 2026-03-29 02:23:55.806581 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-29 02:23:55.806594 | orchestrator | Sunday 29 March 2026 02:23:47 +0000 (0:00:00.643) 0:00:53.596 ********** 2026-03-29 02:23:55.806606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 02:23:55.806620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 02:23:55.806633 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:55.806660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 02:23:55.806672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 02:23:55.806683 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:55.806694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 02:23:55.806705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 02:23:55.806719 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:23:55.806738 | orchestrator | 2026-03-29 02:23:55.806757 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-29 02:23:55.806775 | orchestrator | Sunday 29 March 2026 02:23:48 +0000 (0:00:01.074) 0:00:54.670 ********** 2026-03-29 02:23:55.806793 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:23:55.806813 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:23:55.806833 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:23:55.806853 | orchestrator | 2026-03-29 02:23:55.806874 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-29 02:23:55.806892 | orchestrator | Sunday 29 March 2026 02:23:49 +0000 (0:00:01.261) 0:00:55.931 ********** 2026-03-29 02:23:55.806904 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:23:55.806916 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:23:55.806928 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:23:55.806941 | orchestrator | 2026-03-29 02:23:55.806954 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-29 02:23:55.806967 | orchestrator | Sunday 29 March 2026 02:23:51 +0000 (0:00:01.953) 0:00:57.884 ********** 2026-03-29 02:23:55.806979 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:23:55.806992 | 
orchestrator | 2026-03-29 02:23:55.807024 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-29 02:23:55.807038 | orchestrator | Sunday 29 March 2026 02:23:52 +0000 (0:00:00.599) 0:00:58.484 ********** 2026-03-29 02:23:55.807055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 02:23:55.807088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:55.807104 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:23:55.807118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 02:23:55.807132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:55.807154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:23:56.387471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 02:23:56.387570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:56.387580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:23:56.387588 | orchestrator | 2026-03-29 02:23:56.387598 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-29 02:23:56.387607 | orchestrator | Sunday 29 March 2026 02:23:55 +0000 (0:00:03.247) 0:01:01.731 ********** 2026-03-29 02:23:56.387616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 02:23:56.387624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:56.387665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:23:56.387673 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:23:56.387685 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 02:23:56.387693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:23:56.387700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:23:56.387707 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:23:56.387714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 02:23:56.387732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 02:24:05.795313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:24:05.795426 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:05.795444 | orchestrator | 2026-03-29 02:24:05.795457 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-29 02:24:05.795470 | orchestrator | Sunday 29 March 2026 02:23:56 +0000 (0:00:00.578) 0:01:02.309 ********** 2026-03-29 02:24:05.795498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 02:24:05.795512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 02:24:05.795525 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:05.795537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 02:24:05.795548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 02:24:05.795559 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:05.795574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 02:24:05.795593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 02:24:05.795612 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:05.795630 | orchestrator | 2026-03-29 02:24:05.795649 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-29 02:24:05.795667 | orchestrator | Sunday 29 March 2026 02:23:57 +0000 (0:00:00.821) 0:01:03.131 ********** 2026-03-29 02:24:05.795684 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:05.795700 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:05.795719 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:05.795736 | orchestrator | 2026-03-29 02:24:05.795755 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-29 02:24:05.795773 | orchestrator | Sunday 29 March 2026 02:23:58 +0000 (0:00:01.656) 0:01:04.788 ********** 2026-03-29 02:24:05.795822 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:05.795842 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:05.795861 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:05.795881 | orchestrator | 2026-03-29 02:24:05.795900 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-29 02:24:05.795918 | orchestrator | 
Sunday 29 March 2026 02:24:00 +0000 (0:00:01.956) 0:01:06.745 ********** 2026-03-29 02:24:05.795931 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:05.795944 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:05.795957 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:05.795969 | orchestrator | 2026-03-29 02:24:05.795981 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-29 02:24:05.795995 | orchestrator | Sunday 29 March 2026 02:24:01 +0000 (0:00:00.290) 0:01:07.036 ********** 2026-03-29 02:24:05.796008 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:24:05.796020 | orchestrator | 2026-03-29 02:24:05.796033 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-29 02:24:05.796045 | orchestrator | Sunday 29 March 2026 02:24:01 +0000 (0:00:00.606) 0:01:07.642 ********** 2026-03-29 02:24:05.796081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 02:24:05.796106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 02:24:05.796121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 02:24:05.796134 | orchestrator | 2026-03-29 02:24:05.796148 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-29 02:24:05.796161 | orchestrator | Sunday 29 March 2026 02:24:04 +0000 (0:00:02.780) 0:01:10.423 ********** 2026-03-29 02:24:05.796185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 02:24:05.796199 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:05.796210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 02:24:05.796221 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:05.796310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 02:24:13.219563 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:13.219687 | orchestrator | 2026-03-29 02:24:13.219719 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-29 02:24:13.219734 | orchestrator | Sunday 29 March 2026 02:24:05 +0000 (0:00:01.295) 0:01:11.718 ********** 2026-03-29 02:24:13.219763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 02:24:13.219779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 02:24:13.219792 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:13.219804 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 02:24:13.219851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 02:24:13.219869 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:13.219886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 02:24:13.219905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 02:24:13.219922 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:13.219938 | orchestrator | 2026-03-29 02:24:13.219954 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-29 02:24:13.219972 | orchestrator | Sunday 29 March 2026 02:24:07 +0000 (0:00:01.552) 0:01:13.270 ********** 2026-03-29 02:24:13.219989 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:13.220009 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:13.220027 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:13.220046 | orchestrator | 2026-03-29 02:24:13.220069 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-29 02:24:13.220089 | orchestrator | Sunday 29 March 2026 02:24:07 +0000 (0:00:00.414) 0:01:13.685 ********** 2026-03-29 02:24:13.220102 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:13.220115 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:13.220128 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:13.220140 | orchestrator | 2026-03-29 02:24:13.220153 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-29 02:24:13.220165 | orchestrator | Sunday 29 March 2026 02:24:08 +0000 (0:00:01.222) 0:01:14.907 ********** 2026-03-29 02:24:13.220178 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:24:13.220190 | orchestrator | 2026-03-29 02:24:13.220203 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-29 02:24:13.220215 | orchestrator | Sunday 29 March 2026 02:24:09 +0000 (0:00:00.885) 0:01:15.792 ********** 2026-03-29 02:24:13.220286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 02:24:13.220318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.220333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 
02:24:13.220347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.220360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 02:24:13.220381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 02:24:13.932737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932812 | orchestrator | 2026-03-29 02:24:13.932828 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-29 02:24:13.932842 | orchestrator | Sunday 29 March 2026 02:24:13 +0000 (0:00:03.437) 0:01:19.230 ********** 2026-03-29 02:24:13.932855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 02:24:13.932868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 02:24:13.932907 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:13.932935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 02:24:19.994581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-03-29 02:24:19.994699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 02:24:19.994721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 02:24:19.994737 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:19.994756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 02:24:19.994771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:24:19.994849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 
02:24:19.994866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 02:24:19.994881 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:19.994895 | orchestrator | 2026-03-29 02:24:19.994911 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-29 02:24:19.994926 | orchestrator | Sunday 29 March 2026 02:24:14 +0000 (0:00:00.730) 0:01:19.960 ********** 2026-03-29 02:24:19.994941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 02:24:19.994990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 02:24:19.995006 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:19.995020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 02:24:19.995034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 02:24:19.995047 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:19.995059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 02:24:19.995073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 02:24:19.995086 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:19.995099 | orchestrator | 2026-03-29 02:24:19.995114 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-29 02:24:19.995127 | orchestrator | Sunday 29 March 2026 02:24:15 +0000 (0:00:01.129) 0:01:21.089 ********** 2026-03-29 02:24:19.995140 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:19.995165 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:19.995179 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:19.995192 | orchestrator | 2026-03-29 02:24:19.995205 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-29 02:24:19.995218 | orchestrator | Sunday 29 March 2026 02:24:16 +0000 (0:00:01.305) 0:01:22.395 ********** 2026-03-29 02:24:19.995232 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:19.995246 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:19.995286 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:19.995301 | orchestrator | 2026-03-29 02:24:19.995315 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-29 
02:24:19.995328 | orchestrator | Sunday 29 March 2026 02:24:18 +0000 (0:00:02.016) 0:01:24.412 ********** 2026-03-29 02:24:19.995342 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:19.995355 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:19.995367 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:19.995379 | orchestrator | 2026-03-29 02:24:19.995393 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-29 02:24:19.995407 | orchestrator | Sunday 29 March 2026 02:24:18 +0000 (0:00:00.299) 0:01:24.711 ********** 2026-03-29 02:24:19.995420 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:19.995433 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:19.995447 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:19.995460 | orchestrator | 2026-03-29 02:24:19.995473 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-29 02:24:19.995486 | orchestrator | Sunday 29 March 2026 02:24:19 +0000 (0:00:00.305) 0:01:25.016 ********** 2026-03-29 02:24:19.995499 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:24:19.995512 | orchestrator | 2026-03-29 02:24:19.995526 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-29 02:24:19.995548 | orchestrator | Sunday 29 March 2026 02:24:19 +0000 (0:00:00.905) 0:01:25.921 ********** 2026-03-29 02:24:23.163086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 02:24:23.163206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 02:24:23.163220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 02:24:23.163253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 02:24:23.163261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 02:24:23.163315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-03-29 02:24:23.163323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 02:24:23.163330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:24:23.163337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2026-03-29 02:24:23.164183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 02:24:23.164210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 02:24:23.164215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 02:24:23.164239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.018852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.018950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 02:24:24.018998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 02:24:24.019010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.019020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.019044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.019070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.019081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-03-29 02:24:24.019106 | orchestrator | 2026-03-29 02:24:24.019117 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-29 02:24:24.019146 | orchestrator | Sunday 29 March 2026 02:24:23 +0000 (0:00:03.413) 0:01:29.334 ********** 2026-03-29 02:24:24.019156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 02:24:24.019165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 02:24:24.019175 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.019184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.019200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402385 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:24.402392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 02:24:24.402398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 02:24:24.402704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402762 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:24.402769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 02:24:24.402776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 02:24:24.402783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 02:24:24.402804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 02:24:33.907911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 02:24:33.908029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:24:33.908056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 02:24:33.908078 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:33.908098 | orchestrator | 2026-03-29 02:24:33.908120 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-29 02:24:33.908140 | orchestrator | Sunday 29 March 2026 02:24:24 +0000 (0:00:00.992) 0:01:30.326 ********** 2026-03-29 02:24:33.908160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-29 02:24:33.908179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-29 02:24:33.908192 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:33.908203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-29 02:24:33.908214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-29 02:24:33.908225 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:33.908236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-29 02:24:33.908270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-29 02:24:33.908282 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:33.908358 | orchestrator | 2026-03-29 02:24:33.908373 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-29 02:24:33.908384 | orchestrator | Sunday 29 March 2026 02:24:25 +0000 (0:00:01.200) 0:01:31.526 ********** 2026-03-29 02:24:33.908395 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:33.908406 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:33.908418 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:33.908431 | orchestrator | 2026-03-29 02:24:33.908444 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-29 02:24:33.908456 | orchestrator | Sunday 29 March 2026 02:24:26 +0000 (0:00:01.322) 0:01:32.849 ********** 2026-03-29 02:24:33.908469 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:33.908481 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:33.908493 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:33.908506 | 
orchestrator | 2026-03-29 02:24:33.908518 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-29 02:24:33.908531 | orchestrator | Sunday 29 March 2026 02:24:28 +0000 (0:00:01.946) 0:01:34.796 ********** 2026-03-29 02:24:33.908563 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:33.908575 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:33.908585 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:33.908596 | orchestrator | 2026-03-29 02:24:33.908607 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-29 02:24:33.908618 | orchestrator | Sunday 29 March 2026 02:24:29 +0000 (0:00:00.292) 0:01:35.088 ********** 2026-03-29 02:24:33.908629 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:24:33.908639 | orchestrator | 2026-03-29 02:24:33.908650 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-29 02:24:33.908661 | orchestrator | Sunday 29 March 2026 02:24:30 +0000 (0:00:00.969) 0:01:36.057 ********** 2026-03-29 02:24:33.908681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 02:24:33.908698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 02:24:33.908746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 02:24:36.710220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 02:24:36.710479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 02:24:36.710538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 02:24:36.710574 | orchestrator | 2026-03-29 02:24:36.710593 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-29 02:24:36.710611 | orchestrator | Sunday 29 March 2026 02:24:34 +0000 (0:00:03.892) 0:01:39.949 ********** 2026-03-29 02:24:36.710638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 02:24:36.710673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 02:24:40.168685 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:40.168780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 
02:24:40.168807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 02:24:40.168834 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 02:24:40.168858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 02:24:40.168869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 02:24:40.168883 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:40.168891 | orchestrator | 2026-03-29 02:24:40.168899 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-29 02:24:40.168908 | orchestrator | 
Sunday 29 March 2026 02:24:36 +0000 (0:00:02.794) 0:01:42.744 ********** 2026-03-29 02:24:40.168916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 02:24:40.168930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 02:24:48.162162 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:48.162244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 02:24:48.162263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 02:24:48.162278 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:48.162286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 02:24:48.162307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 02:24:48.162314 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:48.162398 | orchestrator | 2026-03-29 02:24:48.162411 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-29 02:24:48.162419 | orchestrator | Sunday 29 March 2026 02:24:40 +0000 (0:00:03.348) 0:01:46.093 ********** 2026-03-29 02:24:48.162445 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:48.162451 | orchestrator 
| changed: [testbed-node-1] 2026-03-29 02:24:48.162457 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:48.162462 | orchestrator | 2026-03-29 02:24:48.162468 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-29 02:24:48.162473 | orchestrator | Sunday 29 March 2026 02:24:41 +0000 (0:00:01.383) 0:01:47.477 ********** 2026-03-29 02:24:48.162479 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:48.162485 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:48.162490 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:48.162496 | orchestrator | 2026-03-29 02:24:48.162502 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-29 02:24:48.162507 | orchestrator | Sunday 29 March 2026 02:24:43 +0000 (0:00:01.914) 0:01:49.391 ********** 2026-03-29 02:24:48.162513 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:48.162519 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:48.162525 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:48.162531 | orchestrator | 2026-03-29 02:24:48.162537 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-29 02:24:48.162543 | orchestrator | Sunday 29 March 2026 02:24:43 +0000 (0:00:00.284) 0:01:49.676 ********** 2026-03-29 02:24:48.162549 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:24:48.162555 | orchestrator | 2026-03-29 02:24:48.162561 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-29 02:24:48.162568 | orchestrator | Sunday 29 March 2026 02:24:44 +0000 (0:00:01.003) 0:01:50.679 ********** 2026-03-29 02:24:48.162591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 02:24:48.162600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 02:24:48.162605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 02:24:48.162609 | 
orchestrator | 2026-03-29 02:24:48.162612 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-29 02:24:48.162617 | orchestrator | Sunday 29 March 2026 02:24:47 +0000 (0:00:02.827) 0:01:53.507 ********** 2026-03-29 02:24:48.162627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 02:24:48.162631 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:48.162635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 02:24:48.162639 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:48.162643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 02:24:48.162709 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:48.162724 | orchestrator | 2026-03-29 02:24:48.162730 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-29 02:24:48.162735 | orchestrator | Sunday 29 March 2026 02:24:47 +0000 (0:00:00.381) 0:01:53.889 ********** 2026-03-29 02:24:48.162742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-29 02:24:48.162757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-29 02:24:56.498274 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:56.498441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-29 02:24:56.498467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-29 02:24:56.498485 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 02:24:56.498498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-29 02:24:56.498513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-29 02:24:56.498553 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:56.498567 | orchestrator | 2026-03-29 02:24:56.498581 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-29 02:24:56.498597 | orchestrator | Sunday 29 March 2026 02:24:48 +0000 (0:00:00.837) 0:01:54.726 ********** 2026-03-29 02:24:56.498609 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:56.498621 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:56.498634 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:56.498646 | orchestrator | 2026-03-29 02:24:56.498661 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-29 02:24:56.498674 | orchestrator | Sunday 29 March 2026 02:24:50 +0000 (0:00:01.254) 0:01:55.980 ********** 2026-03-29 02:24:56.498688 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:24:56.498702 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:24:56.498714 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:24:56.498728 | orchestrator | 2026-03-29 02:24:56.498742 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-29 02:24:56.498771 | orchestrator | Sunday 29 March 2026 02:24:51 +0000 (0:00:01.933) 0:01:57.913 ********** 2026-03-29 02:24:56.498786 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:56.498800 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
02:24:56.498815 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:56.498830 | orchestrator | 2026-03-29 02:24:56.498844 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-29 02:24:56.498859 | orchestrator | Sunday 29 March 2026 02:24:52 +0000 (0:00:00.313) 0:01:58.227 ********** 2026-03-29 02:24:56.498874 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:24:56.498889 | orchestrator | 2026-03-29 02:24:56.498904 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-29 02:24:56.498919 | orchestrator | Sunday 29 March 2026 02:24:53 +0000 (0:00:01.071) 0:01:59.298 ********** 2026-03-29 02:24:56.498963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 02:24:56.499004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 02:24:56.499033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 02:24:58.112054 | orchestrator | 2026-03-29 02:24:58.112133 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-29 02:24:58.112145 | orchestrator | Sunday 29 March 2026 02:24:56 +0000 (0:00:03.122) 0:02:02.421 ********** 2026-03-29 02:24:58.112171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 02:24:58.112182 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:24:58.112204 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 02:24:58.112230 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:24:58.112243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 02:24:58.112251 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:24:58.112258 | orchestrator | 2026-03-29 02:24:58.112265 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-29 02:24:58.112272 | orchestrator | Sunday 29 March 2026 02:24:57 +0000 (0:00:00.689) 0:02:03.111 ********** 2026-03-29 02:24:58.112285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 02:24:58.112309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 02:24:58.112324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 02:24:58.112392 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 02:25:06.603575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 02:25:06.603691 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:06.603710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 02:25:06.603726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 02:25:06.603757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 02:25:06.603770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 02:25:06.603783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 02:25:06.603794 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:06.603806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 02:25:06.603817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 02:25:06.603828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 02:25:06.603864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 02:25:06.603876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-03-29 02:25:06.603888 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:06.603899 | orchestrator | 2026-03-29 02:25:06.603912 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-29 02:25:06.603924 | orchestrator | Sunday 29 March 2026 02:24:58 +0000 (0:00:00.926) 0:02:04.037 ********** 2026-03-29 02:25:06.603935 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:06.603946 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:06.603957 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:25:06.603968 | orchestrator | 2026-03-29 02:25:06.603979 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-29 02:25:06.603990 | orchestrator | Sunday 29 March 2026 02:24:59 +0000 (0:00:01.579) 0:02:05.617 ********** 2026-03-29 02:25:06.604001 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:06.604014 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:06.604028 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:25:06.604041 | orchestrator | 2026-03-29 02:25:06.604058 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-29 02:25:06.604078 | orchestrator | Sunday 29 March 2026 02:25:01 +0000 (0:00:02.056) 0:02:07.673 ********** 2026-03-29 02:25:06.604096 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:06.604117 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:06.604159 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:06.604181 | orchestrator | 2026-03-29 02:25:06.604202 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-29 02:25:06.604221 | orchestrator | Sunday 29 March 2026 02:25:02 +0000 (0:00:00.323) 0:02:07.996 ********** 2026-03-29 02:25:06.604234 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:06.604247 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 02:25:06.604260 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:06.604273 | orchestrator | 2026-03-29 02:25:06.604286 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-29 02:25:06.604298 | orchestrator | Sunday 29 March 2026 02:25:02 +0000 (0:00:00.305) 0:02:08.302 ********** 2026-03-29 02:25:06.604311 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:25:06.604324 | orchestrator | 2026-03-29 02:25:06.604336 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-29 02:25:06.604349 | orchestrator | Sunday 29 March 2026 02:25:03 +0000 (0:00:01.120) 0:02:09.422 ********** 2026-03-29 02:25:06.604408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 02:25:06.604439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 02:25:06.604453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 02:25:06.604466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 02:25:06.604487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 02:25:07.175063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 02:25:07.175151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 02:25:07.175189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 02:25:07.175206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 02:25:07.175221 | 
orchestrator | 2026-03-29 02:25:07.175237 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-29 02:25:07.175251 | orchestrator | Sunday 29 March 2026 02:25:06 +0000 (0:00:03.100) 0:02:12.523 ********** 2026-03-29 02:25:07.175287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 02:25:07.175312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-29 02:25:07.175324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 02:25:07.175341 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:07.175352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 02:25:07.175408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 02:25:07.175418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 02:25:07.175426 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:07.175456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 02:25:16.112095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 02:25:16.112211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 02:25:16.112231 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:16.112246 | orchestrator | 2026-03-29 02:25:16.112260 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-29 02:25:16.112274 | orchestrator | Sunday 29 March 2026 02:25:07 +0000 (0:00:00.570) 0:02:13.093 ********** 2026-03-29 02:25:16.112288 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 02:25:16.112304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 02:25:16.112318 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:16.112331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 02:25:16.112343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 02:25:16.112357 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:16.112369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 02:25:16.112432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 02:25:16.112446 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:16.112489 
| orchestrator | 2026-03-29 02:25:16.112503 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-29 02:25:16.112516 | orchestrator | Sunday 29 March 2026 02:25:08 +0000 (0:00:01.033) 0:02:14.127 ********** 2026-03-29 02:25:16.112528 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:16.112541 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:16.112582 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:25:16.112595 | orchestrator | 2026-03-29 02:25:16.112607 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-29 02:25:16.112619 | orchestrator | Sunday 29 March 2026 02:25:09 +0000 (0:00:01.315) 0:02:15.442 ********** 2026-03-29 02:25:16.112630 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:16.112642 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:16.112655 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:25:16.112668 | orchestrator | 2026-03-29 02:25:16.112681 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-29 02:25:16.112695 | orchestrator | Sunday 29 March 2026 02:25:11 +0000 (0:00:02.006) 0:02:17.449 ********** 2026-03-29 02:25:16.112708 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:16.112735 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:16.112750 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:16.112762 | orchestrator | 2026-03-29 02:25:16.112776 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-29 02:25:16.112808 | orchestrator | Sunday 29 March 2026 02:25:11 +0000 (0:00:00.299) 0:02:17.748 ********** 2026-03-29 02:25:16.112822 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:25:16.112834 | orchestrator | 2026-03-29 02:25:16.112847 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-03-29 02:25:16.112861 | orchestrator | Sunday 29 March 2026 02:25:12 +0000 (0:00:01.163) 0:02:18.912 ********** 2026-03-29 02:25:16.112876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 02:25:16.112894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:25:16.112910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 02:25:16.112933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 02:25:16.112956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:25:21.240159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:25:21.240297 | orchestrator | 2026-03-29 02:25:21.240318 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-29 02:25:21.240332 | orchestrator | Sunday 29 March 2026 02:25:16 +0000 (0:00:03.118) 0:02:22.031 ********** 2026-03-29 02:25:21.240347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 02:25:21.240491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:25:21.240535 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:21.240555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 02:25:21.240588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:25:21.240601 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:21.240612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 02:25:21.240624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:25:21.240644 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:21.240655 | orchestrator | 2026-03-29 02:25:21.240667 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-29 02:25:21.240678 | orchestrator | Sunday 29 March 2026 02:25:16 +0000 (0:00:00.627) 0:02:22.658 ********** 2026-03-29 02:25:21.240689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-29 02:25:21.240702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-29 02:25:21.240714 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:21.240725 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-29 02:25:21.240737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-29 02:25:21.240747 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:21.240758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-29 02:25:21.240769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-29 02:25:21.240780 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:21.240791 | orchestrator | 2026-03-29 02:25:21.240806 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-29 02:25:21.240818 | orchestrator | Sunday 29 March 2026 02:25:17 +0000 (0:00:00.871) 0:02:23.530 ********** 2026-03-29 02:25:21.240829 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:21.240840 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:21.240851 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:25:21.240861 | orchestrator | 2026-03-29 02:25:21.240872 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-29 02:25:21.240883 | orchestrator | Sunday 29 March 2026 02:25:19 +0000 (0:00:01.604) 0:02:25.134 ********** 2026-03-29 02:25:21.240894 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:21.240905 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:21.240915 | orchestrator | changed: 
[testbed-node-2] 2026-03-29 02:25:21.240926 | orchestrator | 2026-03-29 02:25:21.240937 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-29 02:25:21.240954 | orchestrator | Sunday 29 March 2026 02:25:21 +0000 (0:00:02.022) 0:02:27.157 ********** 2026-03-29 02:25:25.534327 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:25:25.534499 | orchestrator | 2026-03-29 02:25:25.534518 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-29 02:25:25.534530 | orchestrator | Sunday 29 March 2026 02:25:22 +0000 (0:00:01.033) 0:02:28.191 ********** 2026-03-29 02:25:25.534543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 02:25:25.534584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:25:25.534596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 02:25:25.534608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 02:25:25.534635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 02:25:25.534663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:25:25.534674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 02:25:25.534692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 
'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 02:25:25.534703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 02:25:25.534713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-03-29 02:25:25.534729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 02:25:25.534747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455591 | orchestrator | 2026-03-29 02:25:26.455684 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-29 02:25:26.455697 | orchestrator | Sunday 29 March 2026 02:25:25 +0000 (0:00:03.353) 0:02:31.544 ********** 2026-03-29 02:25:26.455726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 02:25:26.455738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455755 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455763 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:26.455784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 02:25:26.455808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455837 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:26.455845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 02:25:26.455852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 02:25:26.455879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 
'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 02:25:37.585824 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:37.585900 | orchestrator | 2026-03-29 02:25:37.585908 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-29 02:25:37.585914 | orchestrator | Sunday 29 March 2026 02:25:26 +0000 (0:00:00.927) 0:02:32.471 ********** 2026-03-29 02:25:37.585919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-29 02:25:37.585925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-29 02:25:37.585932 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:37.585936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-29 02:25:37.585940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-29 02:25:37.585944 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
02:25:37.585948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-29 02:25:37.585952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-29 02:25:37.585955 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:37.585959 | orchestrator | 2026-03-29 02:25:37.585963 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-29 02:25:37.585967 | orchestrator | Sunday 29 March 2026 02:25:27 +0000 (0:00:00.876) 0:02:33.348 ********** 2026-03-29 02:25:37.585971 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:37.585975 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:37.585978 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:25:37.585982 | orchestrator | 2026-03-29 02:25:37.585986 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-29 02:25:37.585990 | orchestrator | Sunday 29 March 2026 02:25:28 +0000 (0:00:01.327) 0:02:34.675 ********** 2026-03-29 02:25:37.585993 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:37.585997 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:37.586001 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:25:37.586004 | orchestrator | 2026-03-29 02:25:37.586008 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-29 02:25:37.586067 | orchestrator | Sunday 29 March 2026 02:25:30 +0000 (0:00:02.087) 0:02:36.763 ********** 2026-03-29 02:25:37.586073 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:25:37.586077 | orchestrator | 2026-03-29 02:25:37.586080 | 
orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-29 02:25:37.586084 | orchestrator | Sunday 29 March 2026 02:25:32 +0000 (0:00:01.309) 0:02:38.072 ********** 2026-03-29 02:25:37.586088 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 02:25:37.586092 | orchestrator | 2026-03-29 02:25:37.586096 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-29 02:25:37.586112 | orchestrator | Sunday 29 March 2026 02:25:35 +0000 (0:00:03.158) 0:02:41.231 ********** 2026-03-29 02:25:37.586136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:25:37.586143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 02:25:37.586151 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:37.586158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:25:37.586166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 02:25:37.586170 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:37.586178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:25:40.019291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 02:25:40.019372 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:40.019382 | orchestrator | 2026-03-29 02:25:40.019389 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-29 02:25:40.019396 | orchestrator | Sunday 29 March 2026 02:25:37 +0000 (0:00:02.271) 0:02:43.502 ********** 2026-03-29 02:25:40.019484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:25:40.019495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 02:25:40.019501 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:40.019520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:25:40.019538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 02:25:40.019544 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:40.019550 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:25:40.019561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 02:25:49.508079 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:49.508220 | orchestrator | 2026-03-29 02:25:49.508251 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-29 02:25:49.508270 | orchestrator | Sunday 29 March 2026 02:25:40 +0000 (0:00:02.440) 0:02:45.943 ********** 2026-03-29 02:25:49.508290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 02:25:49.508343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 02:25:49.508380 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:49.508400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 02:25:49.508418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 02:25:49.508435 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:49.508483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 02:25:49.508501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 02:25:49.508519 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:49.508536 | orchestrator | 2026-03-29 02:25:49.508553 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-29 02:25:49.508570 | orchestrator | Sunday 29 March 2026 02:25:42 +0000 (0:00:02.887) 0:02:48.831 ********** 2026-03-29 02:25:49.508587 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:25:49.508644 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:25:49.508664 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:25:49.508681 | orchestrator | 2026-03-29 02:25:49.508698 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-29 02:25:49.508714 | orchestrator | Sunday 29 March 2026 02:25:44 +0000 (0:00:02.005) 0:02:50.836 ********** 2026-03-29 02:25:49.508730 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:49.508745 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:49.508761 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:49.508777 | orchestrator | 2026-03-29 02:25:49.508793 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-29 02:25:49.508812 | 
orchestrator | Sunday 29 March 2026 02:25:46 +0000 (0:00:01.349) 0:02:52.186 ********** 2026-03-29 02:25:49.508828 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:49.508844 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:49.508861 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:49.508879 | orchestrator | 2026-03-29 02:25:49.508897 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-29 02:25:49.508915 | orchestrator | Sunday 29 March 2026 02:25:46 +0000 (0:00:00.292) 0:02:52.479 ********** 2026-03-29 02:25:49.508932 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:25:49.508949 | orchestrator | 2026-03-29 02:25:49.508968 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-29 02:25:49.508987 | orchestrator | Sunday 29 March 2026 02:25:47 +0000 (0:00:01.297) 0:02:53.777 ********** 2026-03-29 02:25:49.509017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 02:25:49.509039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 02:25:49.509056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 02:25:49.509073 | orchestrator | 2026-03-29 02:25:49.509090 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-29 02:25:49.509122 | orchestrator | Sunday 29 March 2026 02:25:49 +0000 (0:00:01.468) 0:02:55.245 ********** 2026-03-29 02:25:49.509155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 02:25:57.582605 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:57.582719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 02:25:57.582739 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:57.582752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 02:25:57.582764 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:57.582775 | orchestrator | 2026-03-29 02:25:57.582787 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-29 02:25:57.582800 | orchestrator | Sunday 29 March 2026 02:25:49 +0000 (0:00:00.368) 0:02:55.614 ********** 2026-03-29 02:25:57.582812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 02:25:57.582825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 02:25:57.582837 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:57.582848 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:57.582859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 02:25:57.582893 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:57.582904 | orchestrator | 2026-03-29 02:25:57.582957 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-29 
02:25:57.582969 | orchestrator | Sunday 29 March 2026 02:25:50 +0000 (0:00:00.818) 0:02:56.432 ********** 2026-03-29 02:25:57.582980 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:57.582991 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:57.583001 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:57.583012 | orchestrator | 2026-03-29 02:25:57.583023 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-29 02:25:57.583034 | orchestrator | Sunday 29 March 2026 02:25:50 +0000 (0:00:00.429) 0:02:56.861 ********** 2026-03-29 02:25:57.583044 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:57.583055 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:57.583066 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:57.583076 | orchestrator | 2026-03-29 02:25:57.583087 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-29 02:25:57.583098 | orchestrator | Sunday 29 March 2026 02:25:52 +0000 (0:00:01.246) 0:02:58.107 ********** 2026-03-29 02:25:57.583108 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:57.583119 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:57.583129 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:25:57.583140 | orchestrator | 2026-03-29 02:25:57.583150 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-29 02:25:57.583161 | orchestrator | Sunday 29 March 2026 02:25:52 +0000 (0:00:00.369) 0:02:58.477 ********** 2026-03-29 02:25:57.583172 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:25:57.583183 | orchestrator | 2026-03-29 02:25:57.583193 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-29 02:25:57.583204 | orchestrator | Sunday 29 March 2026 02:25:53 +0000 (0:00:01.416) 0:02:59.894 
********** 2026-03-29 02:25:57.583234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 02:25:57.583253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.583265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.583288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.583300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 02:25:57.583321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.681234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:57.681313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:57.681320 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.681338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:57.681344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.681349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 02:25:57.681364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:57.681372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 02:25:57.681378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.681387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.681393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': 
True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 02:25:57.681400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:57.681407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.782979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 02:25:57.783094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2026-03-29 02:25:57.783109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.783123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 02:25:57.783135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.783167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.783187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.783198 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:57.783209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 02:25:57.783220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:57.783230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.783252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.898669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:57.898772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:57.898790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:57.898803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.898816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.898829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 02:25:57.898904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:57.898919 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:57.898931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.898944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:57.898956 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 02:25:57.898968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 02:25:57.899001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:58.970377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:58.970445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:58.970452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 02:25:58.970458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:58.970462 | orchestrator | 2026-03-29 02:25:58.970504 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-29 02:25:58.970524 | orchestrator | Sunday 29 March 2026 02:25:57 +0000 (0:00:04.018) 0:03:03.912 ********** 2026-03-29 02:25:58.970539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 02:25:58.970555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:58.970560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:58.970565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:58.970569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 02:25:58.970580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:58.970590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:59.064911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:59.064985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 02:25:59.064993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.064999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.065030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:59.065047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.065052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.065057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.065062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 02:25:59.065067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 02:25:59.065079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:59.065091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.141557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 
'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.141651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:59.141667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 02:25:59.141701 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:59.141712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:59.141720 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:25:59.141760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 02:25:59.141772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.141782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.141788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.141799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:59.141805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.141815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.323780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 02:25:59.323910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 02:25:59.323949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.323958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:59.323966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}}})  2026-03-29 02:25:59.323977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.323996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:25:59.324004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 02:25:59.324011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.324022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:59.324032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:25:59.324038 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:25:59.324046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 02:25:59.324057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 02:26:08.839229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 02:26:08.839377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 02:26:08.839446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 02:26:08.839556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 02:26:08.839578 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:26:08.839593 | orchestrator | 2026-03-29 02:26:08.839606 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-29 02:26:08.839619 | orchestrator | Sunday 29 March 2026 02:25:59 +0000 (0:00:01.340) 0:03:05.252 ********** 2026-03-29 02:26:08.839631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-29 02:26:08.839659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-29 02:26:08.839682 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:08.839694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-29 02:26:08.839706 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-29 02:26:08.839720 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:26:08.839753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-29 02:26:08.839767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-29 02:26:08.839791 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:26:08.839804 | orchestrator | 2026-03-29 02:26:08.839818 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-29 02:26:08.839831 | orchestrator | Sunday 29 March 2026 02:26:00 +0000 (0:00:01.594) 0:03:06.847 ********** 2026-03-29 02:26:08.839844 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:26:08.839858 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:26:08.839872 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:26:08.839883 | orchestrator | 2026-03-29 02:26:08.839894 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-29 02:26:08.839905 | orchestrator | Sunday 29 March 2026 02:26:02 +0000 (0:00:01.304) 0:03:08.152 ********** 2026-03-29 02:26:08.839916 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:26:08.839941 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:26:08.839953 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:26:08.839964 | orchestrator | 2026-03-29 02:26:08.839975 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-29 
02:26:08.839985 | orchestrator | Sunday 29 March 2026 02:26:04 +0000 (0:00:02.020) 0:03:10.172 ********** 2026-03-29 02:26:08.839996 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:26:08.840007 | orchestrator | 2026-03-29 02:26:08.840018 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-29 02:26:08.840029 | orchestrator | Sunday 29 March 2026 02:26:05 +0000 (0:00:01.224) 0:03:11.396 ********** 2026-03-29 02:26:08.840042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 02:26:08.840062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 02:26:08.840074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 02:26:08.840092 | orchestrator | 2026-03-29 02:26:08.840104 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-29 02:26:08.840122 | orchestrator | Sunday 29 March 2026 02:26:08 +0000 (0:00:03.362) 0:03:14.759 ********** 2026-03-29 02:26:19.313238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 02:26:19.313361 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:19.313390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 02:26:19.391815 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:26:19.391900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 02:26:19.391908 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:26:19.391913 | orchestrator | 2026-03-29 02:26:19.391919 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-29 02:26:19.391924 | orchestrator | Sunday 29 March 2026 02:26:09 +0000 (0:00:00.506) 0:03:15.265 ********** 2026-03-29 02:26:19.391930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 02:26:19.391955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 02:26:19.391961 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:19.391965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 02:26:19.391969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 02:26:19.391972 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:26:19.391997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 02:26:19.392001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 02:26:19.392005 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:26:19.392009 | orchestrator | 2026-03-29 02:26:19.392013 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-29 02:26:19.392017 | orchestrator | Sunday 29 March 2026 02:26:10 +0000 (0:00:00.744) 0:03:16.010 ********** 2026-03-29 02:26:19.392021 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:26:19.392024 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:26:19.392028 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:26:19.392032 | orchestrator | 2026-03-29 02:26:19.392036 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-29 02:26:19.392039 | orchestrator | Sunday 29 March 2026 02:26:12 +0000 (0:00:01.941) 0:03:17.951 ********** 2026-03-29 02:26:19.392043 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:26:19.392047 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:26:19.392050 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:26:19.392054 | orchestrator | 2026-03-29 02:26:19.392058 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-29 02:26:19.392062 | orchestrator | Sunday 29 March 
2026 02:26:13 +0000 (0:00:01.829) 0:03:19.781 ********** 2026-03-29 02:26:19.392066 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:26:19.392070 | orchestrator | 2026-03-29 02:26:19.392074 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-29 02:26:19.392077 | orchestrator | Sunday 29 March 2026 02:26:15 +0000 (0:00:01.512) 0:03:21.294 ********** 2026-03-29 02:26:19.392084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 02:26:19.392097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:26:19.392102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:26:19.392111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 02:26:20.428631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 02:26:20.428725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:26:20.428750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:26:20.428757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:26:20.428764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:26:20.428771 | orchestrator | 2026-03-29 02:26:20.428779 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-29 02:26:20.428786 | orchestrator | Sunday 29 March 2026 02:26:19 +0000 (0:00:03.942) 0:03:25.236 ********** 2026-03-29 02:26:20.428807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 02:26:20.428819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:26:20.428828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:26:20.428835 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:20.428842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 02:26:20.428853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:26:30.814861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:26:30.814984 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:26:30.815022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 02:26:30.815061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 02:26:30.815074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 02:26:30.815086 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:26:30.815097 | orchestrator | 2026-03-29 02:26:30.815110 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-29 02:26:30.815123 | orchestrator | Sunday 29 March 2026 02:26:20 +0000 (0:00:01.114) 0:03:26.350 ********** 2026-03-29 02:26:30.815136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815209 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:30.815220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815272 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:26:30.815283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 02:26:30.815333 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 02:26:30.815344 | orchestrator | 2026-03-29 02:26:30.815356 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-29 02:26:30.815367 | orchestrator | Sunday 29 March 2026 02:26:21 +0000 (0:00:00.855) 0:03:27.206 ********** 2026-03-29 02:26:30.815377 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:26:30.815388 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:26:30.815399 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:26:30.815409 | orchestrator | 2026-03-29 02:26:30.815421 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-29 02:26:30.815431 | orchestrator | Sunday 29 March 2026 02:26:22 +0000 (0:00:01.396) 0:03:28.603 ********** 2026-03-29 02:26:30.815442 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:26:30.815453 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:26:30.815463 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:26:30.815474 | orchestrator | 2026-03-29 02:26:30.815485 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-29 02:26:30.815496 | orchestrator | Sunday 29 March 2026 02:26:24 +0000 (0:00:02.014) 0:03:30.618 ********** 2026-03-29 02:26:30.815506 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:26:30.815517 | orchestrator | 2026-03-29 02:26:30.815562 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-29 02:26:30.815575 | orchestrator | Sunday 29 March 2026 02:26:26 +0000 (0:00:01.556) 0:03:32.174 ********** 2026-03-29 02:26:30.815587 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-29 02:26:30.815599 | orchestrator | 2026-03-29 02:26:30.815610 | orchestrator | 
TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-29 02:26:30.815620 | orchestrator | Sunday 29 March 2026 02:26:27 +0000 (0:00:00.827) 0:03:33.002 ********** 2026-03-29 02:26:30.815633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 02:26:30.815661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 02:26:41.791426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 02:26:41.791540 | orchestrator | 2026-03-29 02:26:41.791596 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single 
external frontend] *** 2026-03-29 02:26:41.791618 | orchestrator | Sunday 29 March 2026 02:26:30 +0000 (0:00:03.738) 0:03:36.740 ********** 2026-03-29 02:26:41.791640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 02:26:41.791661 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:41.791717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 02:26:41.791739 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:26:41.791759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 02:26:41.791778 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
02:26:41.791795 | orchestrator | 2026-03-29 02:26:41.791815 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-29 02:26:41.791835 | orchestrator | Sunday 29 March 2026 02:26:31 +0000 (0:00:01.194) 0:03:37.935 ********** 2026-03-29 02:26:41.791856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 02:26:41.791879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 02:26:41.791929 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:41.791943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 02:26:41.791955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 02:26:41.791968 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:26:41.791982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 02:26:41.791996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 02:26:41.792028 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:26:41.792042 | orchestrator | 2026-03-29 02:26:41.792054 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 02:26:41.792067 | orchestrator | Sunday 29 March 2026 02:26:33 +0000 (0:00:01.333) 0:03:39.269 ********** 2026-03-29 02:26:41.792079 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:26:41.792092 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:26:41.792104 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:26:41.792117 | orchestrator | 2026-03-29 02:26:41.792129 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 02:26:41.792142 | orchestrator | Sunday 29 March 2026 02:26:35 +0000 (0:00:02.197) 0:03:41.466 ********** 2026-03-29 02:26:41.792156 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:26:41.792168 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:26:41.792180 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:26:41.792193 | orchestrator | 2026-03-29 02:26:41.792205 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-29 02:26:41.792216 | orchestrator | Sunday 29 March 2026 02:26:38 +0000 (0:00:02.789) 0:03:44.256 ********** 2026-03-29 02:26:41.792227 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-29 02:26:41.792239 | orchestrator | 2026-03-29 02:26:41.792250 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-29 02:26:41.792261 | orchestrator | Sunday 29 March 2026 02:26:39 +0000 (0:00:01.048) 0:03:45.305 ********** 2026-03-29 02:26:41.792281 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 02:26:41.792293 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:41.792304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 02:26:41.792324 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:26:41.792335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 02:26:41.792347 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:26:41.792357 | orchestrator | 2026-03-29 02:26:41.792368 | orchestrator | TASK [haproxy-config : Add configuration for 
nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-29 02:26:41.792379 | orchestrator | Sunday 29 March 2026 02:26:40 +0000 (0:00:01.000) 0:03:46.306 ********** 2026-03-29 02:26:41.792390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 02:26:41.792402 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:26:41.792413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 02:26:41.792431 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:04.264782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2026-03-29 02:27:04.264884 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:04.264899 | orchestrator | 2026-03-29 02:27:04.264910 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-29 02:27:04.264921 | orchestrator | Sunday 29 March 2026 02:26:41 +0000 (0:00:01.406) 0:03:47.712 ********** 2026-03-29 02:27:04.264931 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:04.264940 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:04.264949 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:04.264958 | orchestrator | 2026-03-29 02:27:04.264967 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 02:27:04.264976 | orchestrator | Sunday 29 March 2026 02:26:43 +0000 (0:00:01.449) 0:03:49.162 ********** 2026-03-29 02:27:04.264985 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:27:04.264995 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:27:04.265003 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:27:04.265012 | orchestrator | 2026-03-29 02:27:04.265021 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 02:27:04.265030 | orchestrator | Sunday 29 March 2026 02:26:45 +0000 (0:00:02.657) 0:03:51.819 ********** 2026-03-29 02:27:04.265062 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:27:04.265072 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:27:04.265080 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:27:04.265089 | orchestrator | 2026-03-29 02:27:04.265115 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-29 02:27:04.265132 | orchestrator | Sunday 29 March 2026 02:26:48 +0000 (0:00:02.629) 0:03:54.448 ********** 2026-03-29 02:27:04.265152 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-serialproxy) 2026-03-29 02:27:04.265175 | orchestrator | 2026-03-29 02:27:04.265188 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-29 02:27:04.265202 | orchestrator | Sunday 29 March 2026 02:26:49 +0000 (0:00:01.134) 0:03:55.583 ********** 2026-03-29 02:27:04.265216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 02:27:04.265231 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:04.265245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 02:27:04.265259 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:04.265274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 02:27:04.265288 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:04.265303 | orchestrator | 2026-03-29 02:27:04.265316 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-29 02:27:04.265331 | orchestrator | Sunday 29 March 2026 02:26:50 +0000 (0:00:01.221) 0:03:56.804 ********** 2026-03-29 02:27:04.265368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 02:27:04.265384 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:04.265400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 02:27:04.265429 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:04.265445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': 
False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 02:27:04.265461 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:04.265476 | orchestrator | 2026-03-29 02:27:04.265505 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-29 02:27:04.265523 | orchestrator | Sunday 29 March 2026 02:26:52 +0000 (0:00:01.338) 0:03:58.142 ********** 2026-03-29 02:27:04.265537 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:04.265553 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:04.265568 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:04.265582 | orchestrator | 2026-03-29 02:27:04.265667 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 02:27:04.265683 | orchestrator | Sunday 29 March 2026 02:26:53 +0000 (0:00:01.716) 0:03:59.858 ********** 2026-03-29 02:27:04.265698 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:27:04.265713 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:27:04.265722 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:27:04.265731 | orchestrator | 2026-03-29 02:27:04.265740 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 02:27:04.265749 | orchestrator | Sunday 29 March 2026 02:26:56 +0000 (0:00:02.286) 0:04:02.145 ********** 2026-03-29 02:27:04.265757 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:27:04.265766 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:27:04.265775 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:27:04.265783 | orchestrator | 2026-03-29 
02:27:04.265792 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-29 02:27:04.265801 | orchestrator | Sunday 29 March 2026 02:26:59 +0000 (0:00:03.201) 0:04:05.347 ********** 2026-03-29 02:27:04.265809 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:27:04.265818 | orchestrator | 2026-03-29 02:27:04.265827 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-29 02:27:04.265835 | orchestrator | Sunday 29 March 2026 02:27:01 +0000 (0:00:01.651) 0:04:06.998 ********** 2026-03-29 02:27:04.265846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:04.265856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 02:27:04.265891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 02:27:04.975916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 02:27:04.976028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:27:04.976045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:04.976057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 02:27:04.976068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 02:27:04.976109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 02:27:04.976148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:27:04.976167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:04.976186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 02:27:04.976203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 02:27:04.976255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 02:27:04.976276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:27:04.976287 | orchestrator | 2026-03-29 02:27:04.976299 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-29 02:27:04.976309 | orchestrator | Sunday 29 March 2026 02:27:04 +0000 (0:00:03.332) 0:04:10.331 ********** 2026-03-29 02:27:04.976330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 02:27:05.129215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 02:27:05.129354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 02:27:05.129383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 02:27:05.129404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:27:05.129450 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:05.129473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 02:27:05.129492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 02:27:05.129548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 02:27:05.129570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 02:27:05.129589 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:27:05.129814 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:05.129842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 02:27:05.129862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 02:27:05.129883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 02:27:05.129935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 02:27:16.843990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 02:27:16.844090 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:16.844103 | orchestrator | 2026-03-29 02:27:16.844113 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-29 02:27:16.844124 | orchestrator | Sunday 29 March 2026 02:27:05 +0000 (0:00:00.725) 0:04:11.056 ********** 2026-03-29 02:27:16.844134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 02:27:16.844169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 02:27:16.844180 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:16.844188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 02:27:16.844197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 02:27:16.844204 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:16.844212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 02:27:16.844220 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 02:27:16.844228 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:16.844235 | orchestrator | 2026-03-29 02:27:16.844241 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-29 02:27:16.844248 | orchestrator | Sunday 29 March 2026 02:27:06 +0000 (0:00:00.890) 0:04:11.947 ********** 2026-03-29 02:27:16.844254 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:27:16.844259 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:27:16.844265 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:27:16.844270 | orchestrator | 2026-03-29 02:27:16.844277 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-29 02:27:16.844283 | orchestrator | Sunday 29 March 2026 02:27:07 +0000 (0:00:01.752) 0:04:13.699 ********** 2026-03-29 02:27:16.844290 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:27:16.844296 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:27:16.844303 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:27:16.844310 | orchestrator | 2026-03-29 02:27:16.844317 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-29 02:27:16.844323 | orchestrator | Sunday 29 March 2026 02:27:10 +0000 (0:00:02.272) 0:04:15.972 ********** 2026-03-29 02:27:16.844330 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:27:16.844338 | orchestrator | 2026-03-29 02:27:16.844345 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-29 02:27:16.844352 | orchestrator | Sunday 29 March 2026 02:27:11 +0000 (0:00:01.504) 0:04:17.477 ********** 2026-03-29 
02:27:16.844373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:27:16.844398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:27:16.844413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:27:16.844422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:27:16.844433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:27:16.844447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-29 02:27:18.804507 | orchestrator | 2026-03-29 02:27:18.804603 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-29 02:27:18.804674 | orchestrator | Sunday 29 March 2026 02:27:16 +0000 (0:00:05.280) 0:04:22.757 ********** 2026-03-29 02:27:18.804691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:27:18.804706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:27:18.804717 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:18.804745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:27:18.804756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:27:18.804804 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:18.804815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:27:18.804825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:27:18.804834 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:18.804844 | orchestrator | 2026-03-29 02:27:18.804853 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-29 02:27:18.804863 | orchestrator | Sunday 29 March 2026 02:27:17 +0000 (0:00:01.042) 0:04:23.799 ********** 2026-03-29 02:27:18.804873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-29 02:27:18.804883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 02:27:18.804895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 02:27:18.804913 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:18.804927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-29 02:27:18.804936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 02:27:18.804945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 02:27:18.804954 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:18.804989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-29 02:27:18.804998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 02:27:18.805020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 02:27:24.801805 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:24.801953 | orchestrator | 2026-03-29 02:27:24.801973 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-29 02:27:24.801986 | orchestrator | Sunday 29 March 2026 02:27:18 +0000 (0:00:00.922) 0:04:24.722 ********** 2026-03-29 02:27:24.801998 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:24.802010 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:24.802079 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:24.802090 | orchestrator | 2026-03-29 02:27:24.802102 | 
orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-29 02:27:24.802114 | orchestrator | Sunday 29 March 2026 02:27:19 +0000 (0:00:00.427) 0:04:25.149 ********** 2026-03-29 02:27:24.802127 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:24.802140 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:24.802154 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:24.802165 | orchestrator | 2026-03-29 02:27:24.802178 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-29 02:27:24.802204 | orchestrator | Sunday 29 March 2026 02:27:20 +0000 (0:00:01.449) 0:04:26.599 ********** 2026-03-29 02:27:24.802228 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:27:24.802242 | orchestrator | 2026-03-29 02:27:24.802256 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-29 02:27:24.802268 | orchestrator | Sunday 29 March 2026 02:27:22 +0000 (0:00:01.694) 0:04:28.294 ********** 2026-03-29 02:27:24.802284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 02:27:24.802326 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 02:27:24.802353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:24.802366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:24.802379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 02:27:24.802412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 02:27:24.802426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 02:27:24.802437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 02:27:24.802457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 02:27:24.802474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:24.802487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:24.802499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:24.802520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:26.449260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 02:27:26.449372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 02:27:26.449419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 02:27:26.449454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 02:27:26.449472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:26.449487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:26.449518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-03-29 02:27:26.449533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 02:27:26.449557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 02:27:26.449577 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:26.449593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 02:27:26.449617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.128262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 02:27:27.128398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 02:27:27.128416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.128443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.128455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 02:27:27.128467 | orchestrator | 2026-03-29 02:27:27.128481 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-29 02:27:27.128493 | orchestrator | Sunday 29 March 2026 02:27:26 +0000 (0:00:04.223) 0:04:32.517 ********** 2026-03-29 02:27:27.128505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 02:27:27.128536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 02:27:27.128555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.128567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.128580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 02:27:27.128599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 02:27:27.128613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 02:27:27.128706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 02:27:27.273827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 02:27:27.273915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.273938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.273945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.273952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 02:27:27.273958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.273966 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:27.273974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 02:27:27.274057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 02:27:27.274073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 02:27:27.274092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.274104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 02:27:27.274115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:27.274131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-03-29 02:27:27.274144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 02:27:28.712002 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:28.712129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:28.712150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:28.712183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 02:27:28.712198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 02:27:28.712213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 02:27:28.712281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:28.712317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 02:27:28.712329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 02:27:28.712341 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
02:27:28.712353 | orchestrator | 2026-03-29 02:27:28.712365 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-29 02:27:28.712378 | orchestrator | Sunday 29 March 2026 02:27:27 +0000 (0:00:00.809) 0:04:33.327 ********** 2026-03-29 02:27:28.712396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 02:27:28.712410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 02:27:28.712425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 02:27:28.712439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 02:27:28.712452 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:28.712464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 02:27:28.712482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 02:27:28.712496 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 02:27:28.712509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 02:27:28.712522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 02:27:28.712535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 02:27:28.712548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 02:27:28.712562 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:28.712582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 02:27:35.563152 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 02:27:35.563278 | orchestrator | 2026-03-29 02:27:35.563302 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-29 02:27:35.563317 | orchestrator | Sunday 29 March 2026 02:27:28 +0000 (0:00:01.307) 0:04:34.635 ********** 2026-03-29 02:27:35.563330 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:35.563344 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:35.563357 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:35.563372 | orchestrator | 2026-03-29 02:27:35.563386 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-29 02:27:35.563401 | orchestrator | Sunday 29 March 2026 02:27:29 +0000 (0:00:00.379) 0:04:35.014 ********** 2026-03-29 02:27:35.563415 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:35.563430 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:35.563445 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:35.563459 | orchestrator | 2026-03-29 02:27:35.563474 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-29 02:27:35.563489 | orchestrator | Sunday 29 March 2026 02:27:30 +0000 (0:00:01.079) 0:04:36.093 ********** 2026-03-29 02:27:35.563503 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:27:35.563517 | orchestrator | 2026-03-29 02:27:35.563531 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-29 02:27:35.563546 | orchestrator | Sunday 29 March 2026 02:27:31 +0000 (0:00:01.553) 0:04:37.647 ********** 2026-03-29 02:27:35.563564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:27:35.563616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:27:35.563717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:27:35.563735 | orchestrator | 2026-03-29 02:27:35.563746 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-29 02:27:35.563778 | orchestrator | Sunday 29 March 2026 02:27:33 +0000 (0:00:02.039) 0:04:39.686 ********** 2026-03-29 02:27:35.563790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 02:27:35.563814 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:35.563826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 02:27:35.563836 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:35.563846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 02:27:35.563857 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:35.563866 | orchestrator | 2026-03-29 02:27:35.563877 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-29 02:27:35.563886 | orchestrator | Sunday 29 March 2026 02:27:34 +0000 (0:00:00.431) 0:04:40.118 ********** 2026-03-29 02:27:35.563897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 02:27:35.563909 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:35.563920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 02:27:35.563930 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:35.563940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 02:27:35.563950 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:35.563960 | orchestrator | 2026-03-29 02:27:35.563970 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-29 02:27:35.563980 | orchestrator | Sunday 29 March 2026 02:27:35 +0000 (0:00:00.922) 0:04:41.040 ********** 2026-03-29 02:27:35.563996 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:45.533770 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:45.533885 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
02:27:45.533900 | orchestrator | 2026-03-29 02:27:45.533914 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-29 02:27:45.533927 | orchestrator | Sunday 29 March 2026 02:27:35 +0000 (0:00:00.451) 0:04:41.492 ********** 2026-03-29 02:27:45.533938 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:45.533976 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:45.533988 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:45.533999 | orchestrator | 2026-03-29 02:27:45.534010 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-29 02:27:45.534082 | orchestrator | Sunday 29 March 2026 02:27:36 +0000 (0:00:01.277) 0:04:42.769 ********** 2026-03-29 02:27:45.534094 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:27:45.534106 | orchestrator | 2026-03-29 02:27:45.534117 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-29 02:27:45.534127 | orchestrator | Sunday 29 March 2026 02:27:38 +0000 (0:00:01.528) 0:04:44.298 ********** 2026-03-29 02:27:45.534155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:45.534174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:45.534186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:45.534217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:45.534247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:45.534262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 02:27:45.534274 | orchestrator | 2026-03-29 02:27:45.534287 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-29 02:27:45.534300 | orchestrator | Sunday 29 March 2026 02:27:44 +0000 (0:00:06.525) 0:04:50.824 ********** 2026-03-29 02:27:45.534314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 02:27:45.534336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 02:27:51.305374 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:51.305496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 02:27:51.305509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 02:27:51.305517 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:51.305523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 02:27:51.305529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 02:27:51.305549 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:51.305555 | orchestrator | 2026-03-29 02:27:51.305561 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-29 02:27:51.305568 | orchestrator | Sunday 29 March 2026 02:27:45 +0000 (0:00:00.638) 0:04:51.463 ********** 2026-03-29 02:27:51.305587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305596 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305618 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:51.305623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305644 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:51.305650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 02:27:51.305670 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:51.305730 | orchestrator | 2026-03-29 02:27:51.305741 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-29 02:27:51.305746 | orchestrator | Sunday 29 March 2026 02:27:46 +0000 (0:00:00.972) 0:04:52.436 ********** 2026-03-29 02:27:51.305751 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:27:51.305756 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:27:51.305761 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:27:51.305766 | orchestrator | 2026-03-29 02:27:51.305772 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-29 02:27:51.305777 | orchestrator | Sunday 29 March 2026 02:27:47 +0000 (0:00:01.349) 0:04:53.785 ********** 2026-03-29 02:27:51.305782 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:27:51.305787 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:27:51.305792 | orchestrator | changed: 
[testbed-node-2] 2026-03-29 02:27:51.305797 | orchestrator | 2026-03-29 02:27:51.305803 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-29 02:27:51.305808 | orchestrator | Sunday 29 March 2026 02:27:50 +0000 (0:00:02.191) 0:04:55.976 ********** 2026-03-29 02:27:51.305813 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:51.305818 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:51.305823 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:51.305828 | orchestrator | 2026-03-29 02:27:51.305834 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-29 02:27:51.305839 | orchestrator | Sunday 29 March 2026 02:27:50 +0000 (0:00:00.635) 0:04:56.612 ********** 2026-03-29 02:27:51.305844 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:51.305849 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:27:51.305854 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:27:51.305859 | orchestrator | 2026-03-29 02:27:51.305865 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-29 02:27:51.305870 | orchestrator | Sunday 29 March 2026 02:27:50 +0000 (0:00:00.306) 0:04:56.919 ********** 2026-03-29 02:27:51.305875 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:27:51.305884 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.548269 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.548397 | orchestrator | 2026-03-29 02:28:35.548416 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-29 02:28:35.548462 | orchestrator | Sunday 29 March 2026 02:27:51 +0000 (0:00:00.317) 0:04:57.236 ********** 2026-03-29 02:28:35.548474 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.548485 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.548495 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 02:28:35.548505 | orchestrator | 2026-03-29 02:28:35.548515 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-29 02:28:35.548525 | orchestrator | Sunday 29 March 2026 02:27:51 +0000 (0:00:00.330) 0:04:57.567 ********** 2026-03-29 02:28:35.548535 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.548545 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.548555 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.548564 | orchestrator | 2026-03-29 02:28:35.548574 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-29 02:28:35.548598 | orchestrator | Sunday 29 March 2026 02:27:52 +0000 (0:00:00.606) 0:04:58.173 ********** 2026-03-29 02:28:35.548609 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.548619 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.548629 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.548639 | orchestrator | 2026-03-29 02:28:35.548649 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-29 02:28:35.548659 | orchestrator | Sunday 29 March 2026 02:27:52 +0000 (0:00:00.521) 0:04:58.694 ********** 2026-03-29 02:28:35.548669 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.548679 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.548689 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:35.548699 | orchestrator | 2026-03-29 02:28:35.548709 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-29 02:28:35.548739 | orchestrator | Sunday 29 March 2026 02:27:53 +0000 (0:00:00.642) 0:04:59.337 ********** 2026-03-29 02:28:35.548773 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.548784 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.548794 | orchestrator | ok: [testbed-node-2] 2026-03-29 
02:28:35.548803 | orchestrator | 2026-03-29 02:28:35.548813 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-29 02:28:35.548823 | orchestrator | Sunday 29 March 2026 02:27:54 +0000 (0:00:00.643) 0:04:59.981 ********** 2026-03-29 02:28:35.548832 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.548842 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.548851 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:35.548861 | orchestrator | 2026-03-29 02:28:35.548870 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-29 02:28:35.548880 | orchestrator | Sunday 29 March 2026 02:27:54 +0000 (0:00:00.932) 0:05:00.913 ********** 2026-03-29 02:28:35.548890 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.548899 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.548908 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:35.548918 | orchestrator | 2026-03-29 02:28:35.548928 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-29 02:28:35.548938 | orchestrator | Sunday 29 March 2026 02:27:55 +0000 (0:00:00.911) 0:05:01.825 ********** 2026-03-29 02:28:35.548947 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.548957 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.548966 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:35.548976 | orchestrator | 2026-03-29 02:28:35.548986 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-29 02:28:35.548996 | orchestrator | Sunday 29 March 2026 02:27:56 +0000 (0:00:00.867) 0:05:02.692 ********** 2026-03-29 02:28:35.549014 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:28:35.549030 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:28:35.549046 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:28:35.549062 | orchestrator | 2026-03-29 02:28:35.549078 
| orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-29 02:28:35.549095 | orchestrator | Sunday 29 March 2026 02:28:06 +0000 (0:00:09.426) 0:05:12.119 ********** 2026-03-29 02:28:35.549111 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.549128 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.549144 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:35.549159 | orchestrator | 2026-03-29 02:28:35.549176 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-29 02:28:35.549195 | orchestrator | Sunday 29 March 2026 02:28:07 +0000 (0:00:01.136) 0:05:13.255 ********** 2026-03-29 02:28:35.549213 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:28:35.549230 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:28:35.549247 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:28:35.549262 | orchestrator | 2026-03-29 02:28:35.549280 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-29 02:28:35.549297 | orchestrator | Sunday 29 March 2026 02:28:17 +0000 (0:00:09.982) 0:05:23.238 ********** 2026-03-29 02:28:35.549315 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.549330 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.549346 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:35.549362 | orchestrator | 2026-03-29 02:28:35.549379 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-29 02:28:35.549397 | orchestrator | Sunday 29 March 2026 02:28:21 +0000 (0:00:03.754) 0:05:26.993 ********** 2026-03-29 02:28:35.549413 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:28:35.549430 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:28:35.549448 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:28:35.549465 | orchestrator | 2026-03-29 02:28:35.549482 | orchestrator | RUNNING HANDLER 
[loadbalancer : Stop master haproxy container] ***************** 2026-03-29 02:28:35.549499 | orchestrator | Sunday 29 March 2026 02:28:30 +0000 (0:00:09.181) 0:05:36.174 ********** 2026-03-29 02:28:35.549537 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.549555 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.549572 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.549590 | orchestrator | 2026-03-29 02:28:35.549607 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-29 02:28:35.549625 | orchestrator | Sunday 29 March 2026 02:28:30 +0000 (0:00:00.692) 0:05:36.866 ********** 2026-03-29 02:28:35.549643 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.549660 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.549678 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.549695 | orchestrator | 2026-03-29 02:28:35.549737 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-29 02:28:35.549779 | orchestrator | Sunday 29 March 2026 02:28:31 +0000 (0:00:00.386) 0:05:37.253 ********** 2026-03-29 02:28:35.549797 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.549814 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.549828 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.549845 | orchestrator | 2026-03-29 02:28:35.549862 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-29 02:28:35.549878 | orchestrator | Sunday 29 March 2026 02:28:31 +0000 (0:00:00.349) 0:05:37.603 ********** 2026-03-29 02:28:35.549894 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.549912 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.549929 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.549945 | orchestrator | 2026-03-29 02:28:35.549962 | orchestrator | RUNNING HANDLER 
[loadbalancer : Start master proxysql container] *************** 2026-03-29 02:28:35.549978 | orchestrator | Sunday 29 March 2026 02:28:32 +0000 (0:00:00.352) 0:05:37.956 ********** 2026-03-29 02:28:35.549994 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.550087 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.550109 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.550125 | orchestrator | 2026-03-29 02:28:35.550142 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-29 02:28:35.550157 | orchestrator | Sunday 29 March 2026 02:28:32 +0000 (0:00:00.705) 0:05:38.661 ********** 2026-03-29 02:28:35.550175 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:35.550188 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:35.550197 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:35.550207 | orchestrator | 2026-03-29 02:28:35.550216 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-29 02:28:35.550226 | orchestrator | Sunday 29 March 2026 02:28:33 +0000 (0:00:00.344) 0:05:39.006 ********** 2026-03-29 02:28:35.550236 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.550245 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.550255 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:35.550264 | orchestrator | 2026-03-29 02:28:35.550274 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-29 02:28:35.550283 | orchestrator | Sunday 29 March 2026 02:28:33 +0000 (0:00:00.873) 0:05:39.880 ********** 2026-03-29 02:28:35.550293 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:35.550302 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:35.550312 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:35.550321 | orchestrator | 2026-03-29 02:28:35.550331 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-29 02:28:35.550342 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 02:28:35.550353 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 02:28:35.550362 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 02:28:35.550372 | orchestrator | 2026-03-29 02:28:35.550382 | orchestrator | 2026-03-29 02:28:35.550402 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:28:35.550412 | orchestrator | Sunday 29 March 2026 02:28:34 +0000 (0:00:00.852) 0:05:40.732 ********** 2026-03-29 02:28:35.550421 | orchestrator | =============================================================================== 2026-03-29 02:28:35.550431 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.98s 2026-03-29 02:28:35.550441 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.43s 2026-03-29 02:28:35.550450 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.18s 2026-03-29 02:28:35.550460 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.53s 2026-03-29 02:28:35.550469 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.28s 2026-03-29 02:28:35.550479 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.22s 2026-03-29 02:28:35.550488 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.02s 2026-03-29 02:28:35.550498 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.94s 2026-03-29 02:28:35.550507 | orchestrator | haproxy-config : Copying over 
glance haproxy config --------------------- 3.89s 2026-03-29 02:28:35.550516 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.75s 2026-03-29 02:28:35.550526 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.74s 2026-03-29 02:28:35.550536 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.44s 2026-03-29 02:28:35.550545 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.41s 2026-03-29 02:28:35.550555 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.36s 2026-03-29 02:28:35.550564 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.35s 2026-03-29 02:28:35.550574 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.35s 2026-03-29 02:28:35.550584 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.33s 2026-03-29 02:28:35.550594 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.29s 2026-03-29 02:28:35.550603 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.25s 2026-03-29 02:28:35.550613 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.24s 2026-03-29 02:28:37.845973 | orchestrator | 2026-03-29 02:28:37 | INFO  | Task cd17e2ce-e4c5-4ec9-abc4-9c45b94d0548 (opensearch) was prepared for execution. 2026-03-29 02:28:37.846122 | orchestrator | 2026-03-29 02:28:37 | INFO  | It takes a moment until task cd17e2ce-e4c5-4ec9-abc4-9c45b94d0548 (opensearch) has been started and output is visible here. 
2026-03-29 02:28:48.447594 | orchestrator | 2026-03-29 02:28:48.447739 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 02:28:48.447760 | orchestrator | 2026-03-29 02:28:48.447812 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 02:28:48.447841 | orchestrator | Sunday 29 March 2026 02:28:41 +0000 (0:00:00.256) 0:00:00.256 ********** 2026-03-29 02:28:48.447863 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:28:48.447882 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:28:48.447899 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:28:48.447918 | orchestrator | 2026-03-29 02:28:48.447935 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 02:28:48.447952 | orchestrator | Sunday 29 March 2026 02:28:42 +0000 (0:00:00.317) 0:00:00.573 ********** 2026-03-29 02:28:48.447992 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-29 02:28:48.448013 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-29 02:28:48.448030 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-29 02:28:48.448047 | orchestrator | 2026-03-29 02:28:48.448065 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-29 02:28:48.448111 | orchestrator | 2026-03-29 02:28:48.448132 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 02:28:48.448150 | orchestrator | Sunday 29 March 2026 02:28:42 +0000 (0:00:00.416) 0:00:00.989 ********** 2026-03-29 02:28:48.448169 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:28:48.448187 | orchestrator | 2026-03-29 02:28:48.448206 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-03-29 02:28:48.448224 | orchestrator | Sunday 29 March 2026 02:28:43 +0000 (0:00:00.498) 0:00:01.487 ********** 2026-03-29 02:28:48.448240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 02:28:48.448258 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 02:28:48.448278 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 02:28:48.448297 | orchestrator | 2026-03-29 02:28:48.448315 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-29 02:28:48.448336 | orchestrator | Sunday 29 March 2026 02:28:43 +0000 (0:00:00.673) 0:00:02.161 ********** 2026-03-29 02:28:48.448364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:48.448391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:48.448441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:48.448480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:48.448518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:48.448539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:48.448559 | orchestrator | 2026-03-29 02:28:48.448577 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 02:28:48.448595 | orchestrator | Sunday 29 March 2026 02:28:45 +0000 (0:00:01.591) 0:00:03.752 ********** 2026-03-29 02:28:48.448615 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:28:48.448634 | orchestrator | 2026-03-29 02:28:48.448653 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-29 02:28:48.448671 | orchestrator | Sunday 29 March 2026 02:28:46 +0000 (0:00:00.547) 0:00:04.300 ********** 2026-03-29 02:28:48.448713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:49.237221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:49.237353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:49.237373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:49.237388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:49.237474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:49.237490 | orchestrator | 2026-03-29 02:28:49.237504 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-29 02:28:49.237516 | orchestrator | Sunday 29 March 2026 02:28:48 +0000 (0:00:02.411) 0:00:06.712 ********** 2026-03-29 02:28:49.237529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:28:49.237542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:28:49.237554 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:49.237568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:28:49.237604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:28:50.288989 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:50.289095 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:28:50.289109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:28:50.289117 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:50.289124 | orchestrator | 2026-03-29 02:28:50.289132 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-29 02:28:50.289141 | orchestrator | Sunday 29 March 2026 02:28:49 +0000 (0:00:00.788) 0:00:07.500 ********** 2026-03-29 02:28:50.289173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:28:50.289196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:28:50.289218 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:28:50.289226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:28:50.289233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:28:50.289240 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:28:50.289252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 02:28:50.289262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 02:28:50.289269 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:28:50.289276 | orchestrator | 2026-03-29 02:28:50.289282 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-29 02:28:50.289294 | orchestrator | Sunday 29 March 2026 02:28:50 +0000 (0:00:01.044) 0:00:08.544 ********** 2026-03-29 02:28:58.302607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:58.302723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:58.302741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:58.302882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:58.302924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:58.302939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:28:58.302964 | orchestrator | 2026-03-29 02:28:58.302978 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-29 02:28:58.302991 | orchestrator | Sunday 29 March 2026 02:28:52 +0000 (0:00:02.287) 0:00:10.832 ********** 2026-03-29 02:28:58.303002 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:28:58.303014 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:28:58.303025 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:28:58.303036 | orchestrator | 2026-03-29 02:28:58.303047 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-29 02:28:58.303058 | orchestrator | Sunday 29 March 2026 02:28:54 +0000 (0:00:02.243) 0:00:13.076 ********** 2026-03-29 02:28:58.303069 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:28:58.303080 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:28:58.303091 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:28:58.303101 | orchestrator | 2026-03-29 02:28:58.303112 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-29 
02:28:58.303123 | orchestrator | Sunday 29 March 2026 02:28:56 +0000 (0:00:01.815) 0:00:14.892 ********** 2026-03-29 02:28:58.303137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:58.303157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:28:58.303179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 02:31:43.760496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:31:43.760640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:31:43.760701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 02:31:43.760738 | orchestrator | 2026-03-29 02:31:43.760757 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 02:31:43.760774 | orchestrator | Sunday 29 March 2026 02:28:58 +0000 (0:00:01.673) 0:00:16.565 ********** 2026-03-29 02:31:43.760789 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:31:43.760807 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:31:43.760816 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:31:43.760824 | orchestrator | 2026-03-29 02:31:43.760834 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 02:31:43.760843 | orchestrator | Sunday 29 March 2026 02:28:58 +0000 (0:00:00.287) 0:00:16.853 ********** 2026-03-29 02:31:43.760852 | orchestrator | 2026-03-29 02:31:43.760861 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 02:31:43.760869 | orchestrator | Sunday 29 March 2026 02:28:58 +0000 (0:00:00.067) 0:00:16.921 ********** 2026-03-29 02:31:43.760878 | orchestrator | 2026-03-29 02:31:43.760887 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 02:31:43.760905 | orchestrator | Sunday 29 March 2026 02:28:58 +0000 (0:00:00.064) 0:00:16.985 ********** 2026-03-29 02:31:43.760914 | orchestrator | 2026-03-29 02:31:43.760923 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-29 02:31:43.760947 | orchestrator | Sunday 29 March 2026 02:28:58 +0000 (0:00:00.063) 0:00:17.049 ********** 2026-03-29 02:31:43.760956 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:31:43.760965 | orchestrator | 2026-03-29 02:31:43.760974 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-29 02:31:43.760982 | 
orchestrator | Sunday 29 March 2026 02:28:58 +0000 (0:00:00.222) 0:00:17.271 ********** 2026-03-29 02:31:43.760991 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:31:43.761000 | orchestrator | 2026-03-29 02:31:43.761010 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-29 02:31:43.761020 | orchestrator | Sunday 29 March 2026 02:28:59 +0000 (0:00:00.764) 0:00:18.036 ********** 2026-03-29 02:31:43.761030 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:31:43.761060 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:31:43.761070 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:31:43.761080 | orchestrator | 2026-03-29 02:31:43.761089 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-29 02:31:43.761100 | orchestrator | Sunday 29 March 2026 02:30:06 +0000 (0:01:06.571) 0:01:24.607 ********** 2026-03-29 02:31:43.761109 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:31:43.761119 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:31:43.761129 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:31:43.761139 | orchestrator | 2026-03-29 02:31:43.761152 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 02:31:43.761167 | orchestrator | Sunday 29 March 2026 02:31:31 +0000 (0:01:25.524) 0:02:50.131 ********** 2026-03-29 02:31:43.761190 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:31:43.761207 | orchestrator | 2026-03-29 02:31:43.761221 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-29 02:31:43.761235 | orchestrator | Sunday 29 March 2026 02:31:32 +0000 (0:00:00.507) 0:02:50.639 ********** 2026-03-29 02:31:43.761251 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:31:43.761266 | orchestrator | 2026-03-29 
02:31:43.761280 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-29 02:31:43.761294 | orchestrator | Sunday 29 March 2026 02:31:35 +0000 (0:00:03.109) 0:02:53.748 ********** 2026-03-29 02:31:43.761310 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:31:43.761325 | orchestrator | 2026-03-29 02:31:43.761341 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-29 02:31:43.761356 | orchestrator | Sunday 29 March 2026 02:31:37 +0000 (0:00:02.473) 0:02:56.222 ********** 2026-03-29 02:31:43.761371 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:31:43.761385 | orchestrator | 2026-03-29 02:31:43.761395 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-29 02:31:43.761403 | orchestrator | Sunday 29 March 2026 02:31:40 +0000 (0:00:03.046) 0:02:59.269 ********** 2026-03-29 02:31:43.761412 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:31:43.761420 | orchestrator | 2026-03-29 02:31:43.761429 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:31:43.761439 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 02:31:43.761449 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 02:31:43.761465 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 02:31:43.761474 | orchestrator | 2026-03-29 02:31:43.761483 | orchestrator | 2026-03-29 02:31:43.761501 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:31:43.761510 | orchestrator | Sunday 29 March 2026 02:31:43 +0000 (0:00:02.736) 0:03:02.005 ********** 2026-03-29 02:31:43.761518 | orchestrator | 
=============================================================================== 2026-03-29 02:31:43.761527 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.52s 2026-03-29 02:31:43.761535 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.57s 2026-03-29 02:31:43.761544 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.11s 2026-03-29 02:31:43.761553 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.05s 2026-03-29 02:31:43.761561 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.74s 2026-03-29 02:31:43.761570 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.47s 2026-03-29 02:31:43.761578 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.41s 2026-03-29 02:31:43.761587 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.29s 2026-03-29 02:31:43.761595 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.24s 2026-03-29 02:31:43.761604 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.82s 2026-03-29 02:31:43.761613 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.67s 2026-03-29 02:31:43.761621 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.59s 2026-03-29 02:31:43.761630 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.04s 2026-03-29 02:31:43.761638 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.79s 2026-03-29 02:31:43.761647 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.76s 2026-03-29 02:31:43.761656 | orchestrator | 
opensearch : Setting sysctl values -------------------------------------- 0.67s 2026-03-29 02:31:43.761673 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-03-29 02:31:44.082696 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-03-29 02:31:44.082793 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-03-29 02:31:44.082804 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-03-29 02:31:46.352951 | orchestrator | 2026-03-29 02:31:46 | INFO  | Task 97be7c06-ddfc-4e8e-ab29-d7b69c198029 (memcached) was prepared for execution. 2026-03-29 02:31:46.353054 | orchestrator | 2026-03-29 02:31:46 | INFO  | It takes a moment until task 97be7c06-ddfc-4e8e-ab29-d7b69c198029 (memcached) has been started and output is visible here. 2026-03-29 02:31:58.143220 | orchestrator | 2026-03-29 02:31:58.143367 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 02:31:58.143388 | orchestrator | 2026-03-29 02:31:58.143404 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 02:31:58.143418 | orchestrator | Sunday 29 March 2026 02:31:50 +0000 (0:00:00.254) 0:00:00.254 ********** 2026-03-29 02:31:58.143431 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:31:58.143445 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:31:58.143458 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:31:58.143471 | orchestrator | 2026-03-29 02:31:58.143484 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 02:31:58.143496 | orchestrator | Sunday 29 March 2026 02:31:50 +0000 (0:00:00.278) 0:00:00.533 ********** 2026-03-29 02:31:58.143510 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-29 02:31:58.143523 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-29 02:31:58.143537 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-29 02:31:58.143549 | orchestrator | 2026-03-29 02:31:58.143561 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-29 02:31:58.143600 | orchestrator | 2026-03-29 02:31:58.143614 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-29 02:31:58.143627 | orchestrator | Sunday 29 March 2026 02:31:51 +0000 (0:00:00.421) 0:00:00.954 ********** 2026-03-29 02:31:58.143640 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:31:58.143655 | orchestrator | 2026-03-29 02:31:58.143669 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-29 02:31:58.143682 | orchestrator | Sunday 29 March 2026 02:31:51 +0000 (0:00:00.466) 0:00:01.420 ********** 2026-03-29 02:31:58.143695 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-29 02:31:58.143704 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-29 02:31:58.143711 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-29 02:31:58.143718 | orchestrator | 2026-03-29 02:31:58.143726 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-29 02:31:58.143733 | orchestrator | Sunday 29 March 2026 02:31:52 +0000 (0:00:00.742) 0:00:02.163 ********** 2026-03-29 02:31:58.143740 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-29 02:31:58.143748 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-29 02:31:58.143756 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-29 02:31:58.143765 | orchestrator | 2026-03-29 02:31:58.143773 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-03-29 02:31:58.143782 | orchestrator | Sunday 29 March 2026 02:31:54 +0000 (0:00:01.707) 0:00:03.870 ********** 2026-03-29 02:31:58.143803 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:31:58.143811 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:31:58.143819 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:31:58.143828 | orchestrator | 2026-03-29 02:31:58.143836 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-29 02:31:58.143844 | orchestrator | Sunday 29 March 2026 02:31:55 +0000 (0:00:01.547) 0:00:05.417 ********** 2026-03-29 02:31:58.143852 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:31:58.143861 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:31:58.143869 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:31:58.143877 | orchestrator | 2026-03-29 02:31:58.143885 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:31:58.143894 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:31:58.143903 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:31:58.143912 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 02:31:58.143920 | orchestrator | 2026-03-29 02:31:58.143929 | orchestrator | 2026-03-29 02:31:58.143937 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:31:58.143946 | orchestrator | Sunday 29 March 2026 02:31:57 +0000 (0:00:02.142) 0:00:07.560 ********** 2026-03-29 02:31:58.143954 | orchestrator | =============================================================================== 2026-03-29 02:31:58.143962 | orchestrator | memcached : Restart memcached container 
--------------------------------- 2.14s 2026-03-29 02:31:58.143971 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.71s 2026-03-29 02:31:58.143978 | orchestrator | memcached : Check memcached container ----------------------------------- 1.55s 2026-03-29 02:31:58.143986 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.74s 2026-03-29 02:31:58.143995 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.47s 2026-03-29 02:31:58.144006 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-03-29 02:31:58.144018 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-03-29 02:32:00.426243 | orchestrator | 2026-03-29 02:32:00 | INFO  | Task 56e80ed2-50d4-4047-8935-f7ae02c5b80f (redis) was prepared for execution. 2026-03-29 02:32:00.426343 | orchestrator | 2026-03-29 02:32:00 | INFO  | It takes a moment until task 56e80ed2-50d4-4047-8935-f7ae02c5b80f (redis) has been started and output is visible here. 
2026-03-29 02:32:09.533798 | orchestrator | 2026-03-29 02:32:09.533883 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 02:32:09.533893 | orchestrator | 2026-03-29 02:32:09.533900 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 02:32:09.533906 | orchestrator | Sunday 29 March 2026 02:32:04 +0000 (0:00:00.263) 0:00:00.263 ********** 2026-03-29 02:32:09.533912 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:32:09.533919 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:32:09.533924 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:32:09.533930 | orchestrator | 2026-03-29 02:32:09.533936 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 02:32:09.533941 | orchestrator | Sunday 29 March 2026 02:32:04 +0000 (0:00:00.322) 0:00:00.586 ********** 2026-03-29 02:32:09.533947 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-29 02:32:09.533953 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-29 02:32:09.533959 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-29 02:32:09.533964 | orchestrator | 2026-03-29 02:32:09.533970 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-29 02:32:09.533975 | orchestrator | 2026-03-29 02:32:09.533981 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-29 02:32:09.533986 | orchestrator | Sunday 29 March 2026 02:32:05 +0000 (0:00:00.423) 0:00:01.009 ********** 2026-03-29 02:32:09.533991 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:32:09.533998 | orchestrator | 2026-03-29 02:32:09.534003 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-29 
02:32:09.534009 | orchestrator | Sunday 29 March 2026 02:32:05 +0000 (0:00:00.522) 0:00:01.532 ********** 2026-03-29 02:32:09.534050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 02:32:09.534062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 02:32:09.534105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 02:32:09.534141 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 02:32:09.534169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 02:32:09.534179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-29 02:32:09.534184 | orchestrator |
2026-03-29 02:32:09.534193 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-29 02:32:09.534201 | orchestrator | Sunday 29 March 2026 02:32:06 +0000 (0:00:01.118) 0:00:02.651 **********
2026-03-29 02:32:09.534211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-29 02:32:09.534312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-29 02:32:09.534331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 02:32:09.534349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 02:32:09.534368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 02:32:13.592338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-29 02:32:13.592426 | orchestrator |
2026-03-29 02:32:13.592441 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-29 02:32:13.592454 | orchestrator | Sunday 29 March 2026 02:32:09 +0000 (0:00:02.556) 0:00:05.207 **********
2026-03-29 02:32:13.592467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-29 02:32:13.592497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-29
02:32:13.592509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 02:32:13.592543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 02:32:13.592556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-29 02:32:13.592585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-29 02:32:13.592593 | orchestrator |
2026-03-29 02:32:13.592600 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-29 02:32:13.592607 | orchestrator | Sunday 29 March 2026 02:32:11 +0000 (0:00:02.388) 0:00:07.596 **********
2026-03-29 02:32:13.592613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-29 02:32:13.592620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes':
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 02:32:13.592635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 02:32:13.592654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 02:32:13.592664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-29 02:32:13.592684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-29 02:32:24.862646 | orchestrator |
2026-03-29 02:32:24.862728 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-29 02:32:24.862737 | orchestrator | Sunday 29 March 2026 02:32:13 +0000 (0:00:00.057) 0:00:09.083 **********
2026-03-29 02:32:24.862743 | orchestrator |
2026-03-29 02:32:24.862751 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-29 02:32:24.862760 | orchestrator | Sunday 29 March 2026 02:32:13 +0000 (0:00:00.061) 0:00:09.141 **********
2026-03-29 02:32:24.862768 | orchestrator |
2026-03-29 02:32:24.862778 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-29 02:32:24.862786 | orchestrator | Sunday 29 March 2026
02:32:13 +0000 (0:00:00.061) 0:00:09.203 **********
2026-03-29 02:32:24.862795 | orchestrator |
2026-03-29 02:32:24.862803 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-29 02:32:24.862811 | orchestrator | Sunday 29 March 2026 02:32:13 +0000 (0:00:00.062) 0:00:09.265 **********
2026-03-29 02:32:24.862819 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:32:24.862828 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:32:24.862837 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:32:24.862846 | orchestrator |
2026-03-29 02:32:24.862854 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-29 02:32:24.862863 | orchestrator | Sunday 29 March 2026 02:32:21 +0000 (0:00:07.861) 0:00:17.126 **********
2026-03-29 02:32:24.862893 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:32:24.862901 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:32:24.862909 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:32:24.862918 | orchestrator |
2026-03-29 02:32:24.862926 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:32:24.862935 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 02:32:24.862945 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 02:32:24.862966 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 02:32:24.862974 | orchestrator |
2026-03-29 02:32:24.862983 | orchestrator |
2026-03-29 02:32:24.862991 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:32:24.862999 | orchestrator | Sunday 29 March 2026 02:32:24 +0000 (0:00:03.177) 0:00:20.303 **********
2026-03-29 02:32:24.863004 | orchestrator | ===============================================================================
2026-03-29 02:32:24.863009 | orchestrator | redis : Restart redis container ----------------------------------------- 7.86s
2026-03-29 02:32:24.863014 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.18s
2026-03-29 02:32:24.863019 | orchestrator | redis : Copying over default config.json files -------------------------- 2.56s
2026-03-29 02:32:24.863024 | orchestrator | redis : Copying over redis config files --------------------------------- 2.39s
2026-03-29 02:32:24.863029 | orchestrator | redis : Check redis containers ------------------------------------------ 1.49s
2026-03-29 02:32:24.863034 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.12s
2026-03-29 02:32:24.863038 | orchestrator | redis : include_tasks --------------------------------------------------- 0.52s
2026-03-29 02:32:24.863043 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-03-29 02:32:24.863048 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-03-29 02:32:24.863053 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.18s
2026-03-29 02:32:26.700909 | orchestrator | 2026-03-29 02:32:26 | INFO  | Task 027ebd73-0261-47ad-94c1-55224fe27ce8 (mariadb) was prepared for execution.
2026-03-29 02:32:26.701020 | orchestrator | 2026-03-29 02:32:26 | INFO  | It takes a moment until task 027ebd73-0261-47ad-94c1-55224fe27ce8 (mariadb) has been started and output is visible here.
2026-03-29 02:32:40.162945 | orchestrator |
2026-03-29 02:32:40.163074 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 02:32:40.163091 | orchestrator |
2026-03-29 02:32:40.163103 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 02:32:40.163169 | orchestrator | Sunday 29 March 2026 02:32:30 +0000 (0:00:00.168) 0:00:00.168 **********
2026-03-29 02:32:40.163182 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:32:40.163194 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:32:40.163205 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:32:40.163216 | orchestrator |
2026-03-29 02:32:40.163227 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 02:32:40.163240 | orchestrator | Sunday 29 March 2026 02:32:30 +0000 (0:00:00.309) 0:00:00.477 **********
2026-03-29 02:32:40.163251 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-29 02:32:40.163263 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-29 02:32:40.163274 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-29 02:32:40.163285 | orchestrator |
2026-03-29 02:32:40.163296 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-29 02:32:40.163307 | orchestrator |
2026-03-29 02:32:40.163318 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-29 02:32:40.163355 | orchestrator | Sunday 29 March 2026 02:32:31 +0000 (0:00:00.543) 0:00:01.021 **********
2026-03-29 02:32:40.163366 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 02:32:40.163378 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 02:32:40.163389 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 02:32:40.163399 | orchestrator |
2026-03-29 02:32:40.163410 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 02:32:40.163422 | orchestrator | Sunday 29 March 2026 02:32:31 +0000 (0:00:00.362) 0:00:01.383 ********** 2026-03-29 02:32:40.163433 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:32:40.163447 | orchestrator | 2026-03-29 02:32:40.163460 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-29 02:32:40.163473 | orchestrator | Sunday 29 March 2026 02:32:32 +0000 (0:00:00.499) 0:00:01.883 ********** 2026-03-29 02:32:40.163508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 02:32:40.163547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 02:32:40.163578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-29 02:32:40.163592 | orchestrator |
2026-03-29 02:32:40.163605 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-29 02:32:40.163617 | orchestrator | Sunday 29 March 2026 02:32:34 +0000 (0:00:02.690) 0:00:04.573 **********
2026-03-29 02:32:40.163630 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:32:40.163643 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:32:40.163655 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:32:40.163668 | orchestrator |
2026-03-29 02:32:40.163681 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-29 02:32:40.163693 | orchestrator | Sunday 29 March 2026 02:32:35 +0000 (0:00:00.618) 0:00:05.191 **********
2026-03-29 02:32:40.163706 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:32:40.163719 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:32:40.163731 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:32:40.163743 | orchestrator |
2026-03-29 02:32:40.163756 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-29 02:32:40.163769 | orchestrator | Sunday 29 March 2026 02:32:37 +0000 (0:00:01.497) 0:00:06.689 **********
2026-03-29 02:32:40.163793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro',
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 02:32:47.853760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 02:32:47.853877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 02:32:47.853918 | orchestrator | 2026-03-29 02:32:47.853933 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-29 02:32:47.853947 | orchestrator | Sunday 29 March 2026 02:32:40 +0000 (0:00:03.113) 0:00:09.803 ********** 2026-03-29 02:32:47.853959 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:32:47.853971 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:32:47.853982 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:32:47.853993 | orchestrator | 2026-03-29 02:32:47.854005 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-29 02:32:47.854103 | orchestrator | Sunday 29 March 2026 02:32:41 +0000 (0:00:01.120) 0:00:10.924 ********** 2026-03-29 02:32:47.854171 | 
orchestrator | changed: [testbed-node-0] 2026-03-29 02:32:47.854184 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:32:47.854195 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:32:47.854206 | orchestrator | 2026-03-29 02:32:47.854218 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 02:32:47.854229 | orchestrator | Sunday 29 March 2026 02:32:45 +0000 (0:00:03.911) 0:00:14.835 ********** 2026-03-29 02:32:47.854240 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:32:47.854252 | orchestrator | 2026-03-29 02:32:47.854265 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-29 02:32:47.854277 | orchestrator | Sunday 29 March 2026 02:32:45 +0000 (0:00:00.506) 0:00:15.342 ********** 2026-03-29 02:32:47.854300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:47.854325 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:32:47.854349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:52.393789 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:32:52.393945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:52.393994 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:32:52.394008 | orchestrator | 2026-03-29 02:32:52.394096 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-29 02:32:52.394110 | orchestrator | Sunday 29 March 2026 02:32:47 +0000 (0:00:02.157) 0:00:17.500 ********** 2026-03-29 02:32:52.394169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:52.394183 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:32:52.394224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:52.394248 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:32:52.394260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:52.394272 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:32:52.394283 | orchestrator | 2026-03-29 02:32:52.394294 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-29 02:32:52.394305 | orchestrator | Sunday 29 March 2026 02:32:50 +0000 (0:00:02.272) 0:00:19.772 ********** 2026-03-29 02:32:52.394330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:55.023010 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:32:55.023112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:55.023126 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:32:55.023170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 02:32:55.023190 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:32:55.023196 | orchestrator | 2026-03-29 02:32:55.023204 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-29 02:32:55.023211 | orchestrator | Sunday 29 March 2026 02:32:52 +0000 (0:00:02.271) 0:00:22.043 ********** 2026-03-29 02:32:55.023230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 02:32:55.023238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 02:32:55.023253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-29 02:35:07.599813 | orchestrator |
2026-03-29 02:35:07.599942 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-29 02:35:07.599962 | orchestrator | Sunday 29 March 2026 02:32:55 +0000 (0:00:02.626) 0:00:24.670 **********
2026-03-29 02:35:07.599975 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:35:07.599988 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:35:07.599999 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:35:07.600010 | orchestrator |
2026-03-29 02:35:07.600021 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-29 02:35:07.600033 | orchestrator | Sunday 29 March 2026 02:32:55 +0000 (0:00:00.830) 0:00:25.501 **********
2026-03-29 02:35:07.600044 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:35:07.600056 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:35:07.600067 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:35:07.600078 | orchestrator |
2026-03-29 02:35:07.600089 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-29 02:35:07.600100 | orchestrator | Sunday 29 March 2026 02:32:56 +0000 (0:00:00.488) 0:00:25.990 **********
2026-03-29 02:35:07.600111 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:35:07.600122 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:35:07.600133 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:35:07.600143 | orchestrator |
2026-03-29 02:35:07.600154 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-29 02:35:07.600165 | orchestrator | Sunday 29 March 2026 02:32:56 +0000 (0:00:00.313) 0:00:26.303 **********
2026-03-29 02:35:07.600178 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-29 02:35:07.600190 | orchestrator | ...ignoring
2026-03-29 02:35:07.600202 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-29 02:35:07.600213 | orchestrator | ...ignoring
2026-03-29 02:35:07.600239 | orchestrator | fatal: [testbed-node-2]: FAILED!
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-29 02:35:07.600250 | orchestrator | ...ignoring 2026-03-29 02:35:07.600400 | orchestrator | 2026-03-29 02:35:07.600433 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-29 02:35:07.600448 | orchestrator | Sunday 29 March 2026 02:33:07 +0000 (0:00:10.883) 0:00:37.186 ********** 2026-03-29 02:35:07.600462 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:07.600473 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:35:07.600484 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:35:07.600495 | orchestrator | 2026-03-29 02:35:07.600507 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-29 02:35:07.600518 | orchestrator | Sunday 29 March 2026 02:33:07 +0000 (0:00:00.448) 0:00:37.635 ********** 2026-03-29 02:35:07.600529 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:07.600540 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:07.600551 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:07.600562 | orchestrator | 2026-03-29 02:35:07.600574 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-29 02:35:07.600585 | orchestrator | Sunday 29 March 2026 02:33:08 +0000 (0:00:00.651) 0:00:38.286 ********** 2026-03-29 02:35:07.600596 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:07.600607 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:07.600618 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:07.600629 | orchestrator | 2026-03-29 02:35:07.600655 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-29 02:35:07.600667 | orchestrator | Sunday 29 March 2026 02:33:09 +0000 (0:00:00.443) 0:00:38.730 ********** 2026-03-29 02:35:07.600678 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 02:35:07.600689 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:07.600700 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:07.600711 | orchestrator | 2026-03-29 02:35:07.600722 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-29 02:35:07.600733 | orchestrator | Sunday 29 March 2026 02:33:09 +0000 (0:00:00.447) 0:00:39.178 ********** 2026-03-29 02:35:07.600744 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:07.600755 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:35:07.600766 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:35:07.600777 | orchestrator | 2026-03-29 02:35:07.600788 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-29 02:35:07.600800 | orchestrator | Sunday 29 March 2026 02:33:09 +0000 (0:00:00.407) 0:00:39.586 ********** 2026-03-29 02:35:07.600811 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:07.600822 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:07.600833 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:07.600844 | orchestrator | 2026-03-29 02:35:07.600855 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 02:35:07.600866 | orchestrator | Sunday 29 March 2026 02:33:10 +0000 (0:00:00.812) 0:00:40.398 ********** 2026-03-29 02:35:07.600877 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:07.600888 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:07.600899 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-29 02:35:07.600910 | orchestrator | 2026-03-29 02:35:07.600921 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-29 02:35:07.600932 | orchestrator | Sunday 29 March 2026 02:33:11 +0000 (0:00:00.386) 0:00:40.785 ********** 2026-03-29 
02:35:07.600943 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:07.600954 | orchestrator | 2026-03-29 02:35:07.600965 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-29 02:35:07.600976 | orchestrator | Sunday 29 March 2026 02:33:21 +0000 (0:00:10.454) 0:00:51.240 ********** 2026-03-29 02:35:07.600987 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:07.600998 | orchestrator | 2026-03-29 02:35:07.601009 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 02:35:07.601021 | orchestrator | Sunday 29 March 2026 02:33:21 +0000 (0:00:00.119) 0:00:51.359 ********** 2026-03-29 02:35:07.601032 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:07.601072 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:07.601089 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:07.601107 | orchestrator | 2026-03-29 02:35:07.601126 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-29 02:35:07.601144 | orchestrator | Sunday 29 March 2026 02:33:22 +0000 (0:00:00.938) 0:00:52.298 ********** 2026-03-29 02:35:07.601163 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:07.601182 | orchestrator | 2026-03-29 02:35:07.601201 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-29 02:35:07.601213 | orchestrator | Sunday 29 March 2026 02:33:29 +0000 (0:00:07.126) 0:00:59.424 ********** 2026-03-29 02:35:07.601224 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:07.601235 | orchestrator | 2026-03-29 02:35:07.601246 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-29 02:35:07.601257 | orchestrator | Sunday 29 March 2026 02:33:32 +0000 (0:00:02.547) 0:01:01.971 ********** 2026-03-29 02:35:07.601268 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:07.601278 | 
orchestrator | 2026-03-29 02:35:07.601318 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-29 02:35:07.601330 | orchestrator | Sunday 29 March 2026 02:33:34 +0000 (0:00:02.399) 0:01:04.371 ********** 2026-03-29 02:35:07.601341 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:07.601352 | orchestrator | 2026-03-29 02:35:07.601363 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-29 02:35:07.601374 | orchestrator | Sunday 29 March 2026 02:33:34 +0000 (0:00:00.119) 0:01:04.491 ********** 2026-03-29 02:35:07.601385 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:07.601395 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:07.601406 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:07.601417 | orchestrator | 2026-03-29 02:35:07.601428 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-29 02:35:07.601439 | orchestrator | Sunday 29 March 2026 02:33:35 +0000 (0:00:00.304) 0:01:04.796 ********** 2026-03-29 02:35:07.601450 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:07.601461 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-29 02:35:07.601472 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:35:07.601483 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:35:07.601493 | orchestrator | 2026-03-29 02:35:07.601504 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-29 02:35:07.601515 | orchestrator | skipping: no hosts matched 2026-03-29 02:35:07.601526 | orchestrator | 2026-03-29 02:35:07.601537 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-29 02:35:07.601548 | orchestrator | 2026-03-29 02:35:07.601559 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-29 02:35:07.601570 | orchestrator | Sunday 29 March 2026 02:33:35 +0000 (0:00:00.541) 0:01:05.337 ********** 2026-03-29 02:35:07.601580 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:35:07.601591 | orchestrator | 2026-03-29 02:35:07.601602 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 02:35:07.601613 | orchestrator | Sunday 29 March 2026 02:33:52 +0000 (0:00:17.191) 0:01:22.528 ********** 2026-03-29 02:35:07.601624 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:35:07.601635 | orchestrator | 2026-03-29 02:35:07.601646 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 02:35:07.601657 | orchestrator | Sunday 29 March 2026 02:34:09 +0000 (0:00:16.592) 0:01:39.120 ********** 2026-03-29 02:35:07.601667 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:35:07.601678 | orchestrator | 2026-03-29 02:35:07.601694 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-29 02:35:07.601705 | orchestrator | 2026-03-29 02:35:07.601723 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-29 02:35:07.601734 | orchestrator | Sunday 29 March 2026 02:34:11 +0000 (0:00:02.365) 0:01:41.486 ********** 2026-03-29 02:35:07.601753 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:35:07.601764 | orchestrator | 2026-03-29 02:35:07.601775 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 02:35:07.601786 | orchestrator | Sunday 29 March 2026 02:34:28 +0000 (0:00:17.167) 0:01:58.654 ********** 2026-03-29 02:35:07.601797 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:35:07.601808 | orchestrator | 2026-03-29 02:35:07.601819 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 02:35:07.601830 
| orchestrator | Sunday 29 March 2026 02:34:45 +0000 (0:00:16.651) 0:02:15.306 ********** 2026-03-29 02:35:07.601841 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:35:07.601852 | orchestrator | 2026-03-29 02:35:07.601863 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-29 02:35:07.601874 | orchestrator | 2026-03-29 02:35:07.601885 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-29 02:35:07.601896 | orchestrator | Sunday 29 March 2026 02:34:48 +0000 (0:00:02.517) 0:02:17.824 ********** 2026-03-29 02:35:07.601907 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:07.601918 | orchestrator | 2026-03-29 02:35:07.601929 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 02:35:07.601940 | orchestrator | Sunday 29 March 2026 02:34:59 +0000 (0:00:11.634) 0:02:29.459 ********** 2026-03-29 02:35:07.601950 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:07.601961 | orchestrator | 2026-03-29 02:35:07.601972 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 02:35:07.601983 | orchestrator | Sunday 29 March 2026 02:35:04 +0000 (0:00:04.607) 0:02:34.066 ********** 2026-03-29 02:35:07.601994 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:07.602005 | orchestrator | 2026-03-29 02:35:07.602069 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-29 02:35:07.602084 | orchestrator | 2026-03-29 02:35:07.602095 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-29 02:35:07.602106 | orchestrator | Sunday 29 March 2026 02:35:07 +0000 (0:00:02.648) 0:02:36.715 ********** 2026-03-29 02:35:07.602117 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:35:07.602128 | orchestrator | 
2026-03-29 02:35:07.602139 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-29 02:35:07.602171 | orchestrator | Sunday 29 March 2026 02:35:07 +0000 (0:00:00.522) 0:02:37.237 ********** 2026-03-29 02:35:21.831240 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:21.831355 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:21.831363 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:21.831368 | orchestrator | 2026-03-29 02:35:21.831375 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-29 02:35:21.831381 | orchestrator | Sunday 29 March 2026 02:35:10 +0000 (0:00:02.545) 0:02:39.783 ********** 2026-03-29 02:35:21.831386 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:21.831390 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:21.831395 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:21.831400 | orchestrator | 2026-03-29 02:35:21.831405 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-29 02:35:21.831409 | orchestrator | Sunday 29 March 2026 02:35:12 +0000 (0:00:02.341) 0:02:42.124 ********** 2026-03-29 02:35:21.831414 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:21.831419 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:21.831423 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:21.831428 | orchestrator | 2026-03-29 02:35:21.831432 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-29 02:35:21.831437 | orchestrator | Sunday 29 March 2026 02:35:15 +0000 (0:00:02.861) 0:02:44.986 ********** 2026-03-29 02:35:21.831442 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:21.831446 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:21.831451 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:21.831455 | orchestrator | 
2026-03-29 02:35:21.831478 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-29 02:35:21.831483 | orchestrator | Sunday 29 March 2026 02:35:17 +0000 (0:00:02.654) 0:02:47.641 ********** 2026-03-29 02:35:21.831487 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:21.831493 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:35:21.831498 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:35:21.831502 | orchestrator | 2026-03-29 02:35:21.831507 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-29 02:35:21.831511 | orchestrator | Sunday 29 March 2026 02:35:20 +0000 (0:00:02.984) 0:02:50.625 ********** 2026-03-29 02:35:21.831516 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:21.831520 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:21.831525 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:21.831529 | orchestrator | 2026-03-29 02:35:21.831534 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:35:21.831539 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-29 02:35:21.831546 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-29 02:35:21.831550 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-29 02:35:21.831555 | orchestrator | 2026-03-29 02:35:21.831559 | orchestrator | 2026-03-29 02:35:21.831564 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:35:21.831569 | orchestrator | Sunday 29 March 2026 02:35:21 +0000 (0:00:00.455) 0:02:51.080 ********** 2026-03-29 02:35:21.831573 | orchestrator | =============================================================================== 2026-03-29 02:35:21.831588 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.36s 2026-03-29 02:35:21.831592 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.24s 2026-03-29 02:35:21.831597 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.64s 2026-03-29 02:35:21.831601 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.88s 2026-03-29 02:35:21.831606 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.45s 2026-03-29 02:35:21.831610 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.13s 2026-03-29 02:35:21.831615 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.88s 2026-03-29 02:35:21.831619 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.61s 2026-03-29 02:35:21.831624 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.91s 2026-03-29 02:35:21.831636 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.11s 2026-03-29 02:35:21.831641 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.98s 2026-03-29 02:35:21.831645 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.86s 2026-03-29 02:35:21.831650 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.69s 2026-03-29 02:35:21.831654 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.65s 2026-03-29 02:35:21.831659 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.65s 2026-03-29 02:35:21.831663 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.63s 2026-03-29 02:35:21.831668 | orchestrator | 
mariadb : Wait for first MariaDB service port liveness ------------------ 2.55s 2026-03-29 02:35:21.831672 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.55s 2026-03-29 02:35:21.831676 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.40s 2026-03-29 02:35:21.831681 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.34s 2026-03-29 02:35:24.397699 | orchestrator | 2026-03-29 02:35:24 | INFO  | Task 1f7f9dbe-6f92-49af-9088-a50fe0c19000 (rabbitmq) was prepared for execution. 2026-03-29 02:35:24.397799 | orchestrator | 2026-03-29 02:35:24 | INFO  | It takes a moment until task 1f7f9dbe-6f92-49af-9088-a50fe0c19000 (rabbitmq) has been started and output is visible here. 2026-03-29 02:35:38.062250 | orchestrator | 2026-03-29 02:35:38.062391 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 02:35:38.062404 | orchestrator | 2026-03-29 02:35:38.062412 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 02:35:38.062419 | orchestrator | Sunday 29 March 2026 02:35:28 +0000 (0:00:00.192) 0:00:00.193 ********** 2026-03-29 02:35:38.062426 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:38.062434 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:35:38.062440 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:35:38.062447 | orchestrator | 2026-03-29 02:35:38.062453 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 02:35:38.062460 | orchestrator | Sunday 29 March 2026 02:35:29 +0000 (0:00:00.326) 0:00:00.519 ********** 2026-03-29 02:35:38.062466 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-29 02:35:38.062473 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-29 02:35:38.062480 | orchestrator | ok: [testbed-node-2] => 
(item=enable_rabbitmq_True) 2026-03-29 02:35:38.062486 | orchestrator | 2026-03-29 02:35:38.062493 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-29 02:35:38.062499 | orchestrator | 2026-03-29 02:35:38.062506 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 02:35:38.062512 | orchestrator | Sunday 29 March 2026 02:35:29 +0000 (0:00:00.615) 0:00:01.135 ********** 2026-03-29 02:35:38.062519 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:35:38.062526 | orchestrator | 2026-03-29 02:35:38.062533 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-29 02:35:38.062539 | orchestrator | Sunday 29 March 2026 02:35:30 +0000 (0:00:00.524) 0:00:01.660 ********** 2026-03-29 02:35:38.062545 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:38.062552 | orchestrator | 2026-03-29 02:35:38.062558 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-29 02:35:38.062564 | orchestrator | Sunday 29 March 2026 02:35:31 +0000 (0:00:01.032) 0:00:02.693 ********** 2026-03-29 02:35:38.062571 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:38.062583 | orchestrator | 2026-03-29 02:35:38.062594 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-29 02:35:38.062604 | orchestrator | Sunday 29 March 2026 02:35:31 +0000 (0:00:00.399) 0:00:03.092 ********** 2026-03-29 02:35:38.062613 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:38.062624 | orchestrator | 2026-03-29 02:35:38.062634 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-29 02:35:38.062644 | orchestrator | Sunday 29 March 2026 02:35:32 +0000 (0:00:00.360) 0:00:03.453 ********** 2026-03-29 
02:35:38.062656 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:38.062666 | orchestrator | 2026-03-29 02:35:38.062676 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-29 02:35:38.062688 | orchestrator | Sunday 29 March 2026 02:35:32 +0000 (0:00:00.396) 0:00:03.849 ********** 2026-03-29 02:35:38.062698 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:38.062709 | orchestrator | 2026-03-29 02:35:38.062716 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 02:35:38.062722 | orchestrator | Sunday 29 March 2026 02:35:33 +0000 (0:00:00.595) 0:00:04.445 ********** 2026-03-29 02:35:38.062743 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:35:38.062750 | orchestrator | 2026-03-29 02:35:38.062771 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-29 02:35:38.062778 | orchestrator | Sunday 29 March 2026 02:35:33 +0000 (0:00:00.895) 0:00:05.341 ********** 2026-03-29 02:35:38.062784 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:35:38.062790 | orchestrator | 2026-03-29 02:35:38.062798 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-29 02:35:38.062805 | orchestrator | Sunday 29 March 2026 02:35:34 +0000 (0:00:00.868) 0:00:06.210 ********** 2026-03-29 02:35:38.062812 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:38.062820 | orchestrator | 2026-03-29 02:35:38.062827 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-29 02:35:38.062834 | orchestrator | Sunday 29 March 2026 02:35:35 +0000 (0:00:00.367) 0:00:06.578 ********** 2026-03-29 02:35:38.062841 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:38.062849 | orchestrator | 2026-03-29 02:35:38.062856 
| orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-29 02:35:38.062863 | orchestrator | Sunday 29 March 2026 02:35:35 +0000 (0:00:00.354) 0:00:06.932 ********** 2026-03-29 02:35:38.062890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:38.062901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:38.062910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:38.062923 | orchestrator | 2026-03-29 02:35:38.062935 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-29 02:35:38.062942 | orchestrator | Sunday 29 March 2026 02:35:36 +0000 (0:00:00.805) 0:00:07.738 ********** 2026-03-29 02:35:38.062950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:38.062966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:56.866176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:56.866271 | orchestrator | 2026-03-29 02:35:56.866279 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-29 02:35:56.866285 | orchestrator | Sunday 29 March 2026 02:35:38 +0000 (0:00:01.688) 0:00:09.427 ********** 2026-03-29 02:35:56.866303 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 02:35:56.866308 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 02:35:56.866312 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 02:35:56.866316 | orchestrator | 2026-03-29 02:35:56.866320 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-03-29 02:35:56.866323 | orchestrator | Sunday 29 March 2026 02:35:39 +0000 (0:00:01.438) 0:00:10.866 ********** 2026-03-29 02:35:56.866327 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 02:35:56.866390 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 02:35:56.866397 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 02:35:56.866406 | orchestrator | 2026-03-29 02:35:56.866415 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-29 02:35:56.866420 | orchestrator | Sunday 29 March 2026 02:35:41 +0000 (0:00:01.751) 0:00:12.617 ********** 2026-03-29 02:35:56.866426 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 02:35:56.866433 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 02:35:56.866439 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 02:35:56.866445 | orchestrator | 2026-03-29 02:35:56.866451 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-29 02:35:56.866456 | orchestrator | Sunday 29 March 2026 02:35:42 +0000 (0:00:01.379) 0:00:13.997 ********** 2026-03-29 02:35:56.866463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 02:35:56.866469 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 02:35:56.866476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 02:35:56.866482 | orchestrator | 2026-03-29 02:35:56.866488 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-03-29 02:35:56.866494 | orchestrator | Sunday 29 March 2026 02:35:44 +0000 (0:00:01.825) 0:00:15.822 ********** 2026-03-29 02:35:56.866500 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 02:35:56.866507 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 02:35:56.866513 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 02:35:56.866520 | orchestrator | 2026-03-29 02:35:56.866526 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-29 02:35:56.866533 | orchestrator | Sunday 29 March 2026 02:35:45 +0000 (0:00:01.418) 0:00:17.241 ********** 2026-03-29 02:35:56.866539 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 02:35:56.866546 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 02:35:56.866552 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 02:35:56.866558 | orchestrator | 2026-03-29 02:35:56.866564 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 02:35:56.866570 | orchestrator | Sunday 29 March 2026 02:35:47 +0000 (0:00:01.399) 0:00:18.640 ********** 2026-03-29 02:35:56.866577 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:35:56.866584 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:35:56.866605 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:35:56.866619 | orchestrator | 2026-03-29 02:35:56.866625 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-29 02:35:56.866632 | orchestrator | Sunday 
29 March 2026 02:35:47 +0000 (0:00:00.477) 0:00:19.118 ********** 2026-03-29 02:35:56.866639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:56.866652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:56.866660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 02:35:56.866667 | orchestrator | 2026-03-29 02:35:56.866674 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-29 02:35:56.866680 | orchestrator | Sunday 29 March 2026 02:35:48 +0000 (0:00:01.204) 0:00:20.322 ********** 2026-03-29 02:35:56.866686 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:56.866693 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:35:56.866699 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:35:56.866705 | orchestrator | 2026-03-29 02:35:56.866712 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-29 02:35:56.866723 | orchestrator | Sunday 29 March 2026 02:35:49 +0000 (0:00:00.920) 0:00:21.242 ********** 2026-03-29 02:35:56.866729 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:35:56.866736 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:35:56.866742 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:35:56.866748 | orchestrator | 2026-03-29 02:35:56.866755 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-29 02:35:56.866765 | orchestrator | Sunday 29 March 2026 02:35:56 +0000 (0:00:06.980) 0:00:28.222 ********** 2026-03-29 02:37:36.002457 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:37:36.002541 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:37:36.002548 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:37:36.002553 | orchestrator | 2026-03-29 02:37:36.002560 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 02:37:36.002566 | orchestrator | 2026-03-29 02:37:36.002571 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 02:37:36.002577 | orchestrator | Sunday 29 March 2026 02:35:57 +0000 (0:00:00.534) 0:00:28.756 ********** 2026-03-29 02:37:36.002582 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:37:36.002588 | orchestrator | 2026-03-29 02:37:36.002593 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 02:37:36.002597 | orchestrator | Sunday 29 March 2026 02:35:57 +0000 (0:00:00.617) 0:00:29.374 ********** 2026-03-29 02:37:36.002602 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:37:36.002607 | orchestrator | 2026-03-29 02:37:36.002612 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 02:37:36.002617 | orchestrator | Sunday 29 
March 2026 02:35:58 +0000 (0:00:00.243) 0:00:29.617 ********** 2026-03-29 02:37:36.002622 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:37:36.002627 | orchestrator | 2026-03-29 02:37:36.002632 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 02:37:36.002636 | orchestrator | Sunday 29 March 2026 02:35:59 +0000 (0:00:01.693) 0:00:31.311 ********** 2026-03-29 02:37:36.002641 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:37:36.002646 | orchestrator | 2026-03-29 02:37:36.002651 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 02:37:36.002656 | orchestrator | 2026-03-29 02:37:36.002661 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 02:37:36.002666 | orchestrator | Sunday 29 March 2026 02:36:55 +0000 (0:00:55.942) 0:01:27.253 ********** 2026-03-29 02:37:36.002671 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:37:36.002676 | orchestrator | 2026-03-29 02:37:36.002681 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 02:37:36.002685 | orchestrator | Sunday 29 March 2026 02:36:56 +0000 (0:00:00.636) 0:01:27.890 ********** 2026-03-29 02:37:36.002690 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:37:36.002695 | orchestrator | 2026-03-29 02:37:36.002700 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 02:37:36.002705 | orchestrator | Sunday 29 March 2026 02:36:56 +0000 (0:00:00.221) 0:01:28.111 ********** 2026-03-29 02:37:36.002710 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:37:36.002715 | orchestrator | 2026-03-29 02:37:36.002719 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 02:37:36.002736 | orchestrator | Sunday 29 March 2026 02:36:58 +0000 (0:00:01.595) 0:01:29.706 
********** 2026-03-29 02:37:36.002741 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:37:36.002746 | orchestrator | 2026-03-29 02:37:36.002751 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 02:37:36.002755 | orchestrator | 2026-03-29 02:37:36.002763 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 02:37:36.002771 | orchestrator | Sunday 29 March 2026 02:37:14 +0000 (0:00:16.226) 0:01:45.932 ********** 2026-03-29 02:37:36.002779 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:37:36.002787 | orchestrator | 2026-03-29 02:37:36.002794 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 02:37:36.002821 | orchestrator | Sunday 29 March 2026 02:37:15 +0000 (0:00:00.865) 0:01:46.798 ********** 2026-03-29 02:37:36.002830 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:37:36.002838 | orchestrator | 2026-03-29 02:37:36.002846 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 02:37:36.002854 | orchestrator | Sunday 29 March 2026 02:37:15 +0000 (0:00:00.235) 0:01:47.033 ********** 2026-03-29 02:37:36.002862 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:37:36.002871 | orchestrator | 2026-03-29 02:37:36.002880 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 02:37:36.002885 | orchestrator | Sunday 29 March 2026 02:37:17 +0000 (0:00:01.608) 0:01:48.641 ********** 2026-03-29 02:37:36.002890 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:37:36.002895 | orchestrator | 2026-03-29 02:37:36.002900 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-29 02:37:36.002904 | orchestrator | 2026-03-29 02:37:36.002909 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-03-29 02:37:36.002914 | orchestrator | Sunday 29 March 2026 02:37:32 +0000 (0:00:15.349) 0:02:03.990 ********** 2026-03-29 02:37:36.002919 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:37:36.002924 | orchestrator | 2026-03-29 02:37:36.002928 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-29 02:37:36.002933 | orchestrator | Sunday 29 March 2026 02:37:33 +0000 (0:00:00.427) 0:02:04.418 ********** 2026-03-29 02:37:36.002938 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 02:37:36.002943 | orchestrator | enable_outward_rabbitmq_True 2026-03-29 02:37:36.002948 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 02:37:36.002953 | orchestrator | outward_rabbitmq_restart 2026-03-29 02:37:36.002958 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:37:36.002963 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:37:36.002968 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:37:36.002972 | orchestrator | 2026-03-29 02:37:36.002977 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-29 02:37:36.002982 | orchestrator | skipping: no hosts matched 2026-03-29 02:37:36.002987 | orchestrator | 2026-03-29 02:37:36.002992 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-29 02:37:36.002996 | orchestrator | skipping: no hosts matched 2026-03-29 02:37:36.003003 | orchestrator | 2026-03-29 02:37:36.003008 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-29 02:37:36.003014 | orchestrator | skipping: no hosts matched 2026-03-29 02:37:36.003020 | orchestrator | 2026-03-29 02:37:36.003026 | orchestrator | PLAY RECAP ********************************************************************* 
2026-03-29 02:37:36.003044 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-29 02:37:36.003051 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 02:37:36.003057 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 02:37:36.003062 | orchestrator | 2026-03-29 02:37:36.003068 | orchestrator | 2026-03-29 02:37:36.003074 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:37:36.003080 | orchestrator | Sunday 29 March 2026 02:37:35 +0000 (0:00:02.619) 0:02:07.038 ********** 2026-03-29 02:37:36.003086 | orchestrator | =============================================================================== 2026-03-29 02:37:36.003092 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.52s 2026-03-29 02:37:36.003098 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.98s 2026-03-29 02:37:36.003109 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.90s 2026-03-29 02:37:36.003115 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.62s 2026-03-29 02:37:36.003121 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.12s 2026-03-29 02:37:36.003125 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.83s 2026-03-29 02:37:36.003130 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.75s 2026-03-29 02:37:36.003135 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.69s 2026-03-29 02:37:36.003140 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.44s 2026-03-29 02:37:36.003144 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.42s 2026-03-29 02:37:36.003149 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.40s 2026-03-29 02:37:36.003154 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.38s 2026-03-29 02:37:36.003159 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.20s 2026-03-29 02:37:36.003164 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s 2026-03-29 02:37:36.003172 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.92s 2026-03-29 02:37:36.003177 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.90s 2026-03-29 02:37:36.003182 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.87s 2026-03-29 02:37:36.003187 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.81s 2026-03-29 02:37:36.003192 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.70s 2026-03-29 02:37:36.003197 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-03-29 02:37:38.402331 | orchestrator | 2026-03-29 02:37:38 | INFO  | Task 7e1a6f7c-d3d3-4524-8979-6097add43ced (openvswitch) was prepared for execution. 2026-03-29 02:37:38.402427 | orchestrator | 2026-03-29 02:37:38 | INFO  | It takes a moment until task 7e1a6f7c-d3d3-4524-8979-6097add43ced (openvswitch) has been started and output is visible here. 
2026-03-29 02:37:51.255771 | orchestrator | 2026-03-29 02:37:51.255892 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 02:37:51.255910 | orchestrator | 2026-03-29 02:37:51.255922 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 02:37:51.255934 | orchestrator | Sunday 29 March 2026 02:37:42 +0000 (0:00:00.257) 0:00:00.257 ********** 2026-03-29 02:37:51.255946 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:37:51.255958 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:37:51.255969 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:37:51.255980 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:37:51.255990 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:37:51.256001 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:37:51.256012 | orchestrator | 2026-03-29 02:37:51.256023 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 02:37:51.256034 | orchestrator | Sunday 29 March 2026 02:37:43 +0000 (0:00:00.745) 0:00:01.003 ********** 2026-03-29 02:37:51.256045 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 02:37:51.256057 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 02:37:51.256068 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 02:37:51.256078 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 02:37:51.256089 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 02:37:51.256100 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 02:37:51.256111 | orchestrator | 2026-03-29 02:37:51.256146 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-29 02:37:51.256158 | orchestrator | 2026-03-29 02:37:51.256170 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-29 02:37:51.256181 | orchestrator | Sunday 29 March 2026 02:37:43 +0000 (0:00:00.591) 0:00:01.594 ********** 2026-03-29 02:37:51.256193 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:37:51.256206 | orchestrator | 2026-03-29 02:37:51.256217 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-29 02:37:51.256227 | orchestrator | Sunday 29 March 2026 02:37:45 +0000 (0:00:01.160) 0:00:02.754 ********** 2026-03-29 02:37:51.256238 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-29 02:37:51.256250 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-29 02:37:51.256261 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-29 02:37:51.256272 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-29 02:37:51.256285 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-29 02:37:51.256298 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-29 02:37:51.256310 | orchestrator | 2026-03-29 02:37:51.256322 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-29 02:37:51.256335 | orchestrator | Sunday 29 March 2026 02:37:46 +0000 (0:00:01.268) 0:00:04.023 ********** 2026-03-29 02:37:51.256347 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-29 02:37:51.256360 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-29 02:37:51.256372 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-29 02:37:51.256385 | orchestrator | changed: 
[testbed-node-1] => (item=openvswitch) 2026-03-29 02:37:51.256397 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-29 02:37:51.256410 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-29 02:37:51.256422 | orchestrator | 2026-03-29 02:37:51.256435 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-29 02:37:51.256470 | orchestrator | Sunday 29 March 2026 02:37:47 +0000 (0:00:01.497) 0:00:05.520 ********** 2026-03-29 02:37:51.256483 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-29 02:37:51.256496 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:37:51.256510 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-29 02:37:51.256523 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:37:51.256536 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-29 02:37:51.256549 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:37:51.256561 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-29 02:37:51.256574 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:37:51.256586 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-29 02:37:51.256598 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:37:51.256611 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-29 02:37:51.256624 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:37:51.256636 | orchestrator | 2026-03-29 02:37:51.256647 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-29 02:37:51.256658 | orchestrator | Sunday 29 March 2026 02:37:49 +0000 (0:00:01.223) 0:00:06.743 ********** 2026-03-29 02:37:51.256669 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:37:51.256680 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:37:51.256691 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 02:37:51.256702 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:37:51.256712 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:37:51.256723 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:37:51.256734 | orchestrator | 2026-03-29 02:37:51.256744 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-29 02:37:51.256769 | orchestrator | Sunday 29 March 2026 02:37:49 +0000 (0:00:00.809) 0:00:07.553 ********** 2026-03-29 02:37:51.256803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:51.256821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:51.256833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:51.256893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:51.256912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:51.256932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:53.692925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693165 | orchestrator | 2026-03-29 02:37:53.693177 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-29 02:37:53.693188 | orchestrator | Sunday 29 March 2026 02:37:51 +0000 (0:00:01.447) 0:00:09.000 ********** 2026-03-29 02:37:53.693197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693248 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:53.693265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:56.508904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509030 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509127 | orchestrator | 2026-03-29 02:37:56.509139 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-29 02:37:56.509151 | orchestrator | Sunday 29 March 2026 02:37:53 +0000 (0:00:02.452) 0:00:11.453 ********** 2026-03-29 02:37:56.509161 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:37:56.509188 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:37:56.509198 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:37:56.509218 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:37:56.509228 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:37:56.509238 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:37:56.509249 | orchestrator | 2026-03-29 02:37:56.509260 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-29 02:37:56.509270 | orchestrator | Sunday 29 March 2026 02:37:54 +0000 (0:00:01.041) 0:00:12.494 ********** 2026-03-29 02:37:56.509281 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:37:56.509345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:38:21.628986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 02:38:21.629123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:38:21.629147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 
02:38:21.629206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:38:21.629225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:38:21.629261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:38:21.629279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 02:38:21.629294 | orchestrator | 2026-03-29 02:38:21.629311 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 02:38:21.629328 | orchestrator | Sunday 29 March 2026 02:37:56 +0000 (0:00:01.760) 0:00:14.254 ********** 2026-03-29 02:38:21.629342 | orchestrator | 2026-03-29 02:38:21.629358 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 02:38:21.629373 | orchestrator | Sunday 29 March 2026 02:37:56 +0000 (0:00:00.355) 0:00:14.610 ********** 2026-03-29 02:38:21.629387 | orchestrator | 2026-03-29 02:38:21.629409 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 02:38:21.629418 | orchestrator | Sunday 29 March 2026 02:37:57 +0000 (0:00:00.134) 0:00:14.745 ********** 2026-03-29 02:38:21.629426 | orchestrator | 2026-03-29 02:38:21.629435 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-03-29 02:38:21.629444 | orchestrator | Sunday 29 March 2026 02:37:57 +0000 (0:00:00.136) 0:00:14.881 ********** 2026-03-29 02:38:21.629453 | orchestrator | 2026-03-29 02:38:21.629461 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 02:38:21.629470 | orchestrator | Sunday 29 March 2026 02:37:57 +0000 (0:00:00.129) 0:00:15.011 ********** 2026-03-29 02:38:21.629537 | orchestrator | 2026-03-29 02:38:21.629547 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 02:38:21.629557 | orchestrator | Sunday 29 March 2026 02:37:57 +0000 (0:00:00.130) 0:00:15.141 ********** 2026-03-29 02:38:21.629568 | orchestrator | 2026-03-29 02:38:21.629578 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-29 02:38:21.629588 | orchestrator | Sunday 29 March 2026 02:37:57 +0000 (0:00:00.134) 0:00:15.276 ********** 2026-03-29 02:38:21.629598 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:38:21.629610 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:38:21.629619 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:38:21.629630 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:38:21.629640 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:38:21.629650 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:38:21.629660 | orchestrator | 2026-03-29 02:38:21.629670 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-29 02:38:21.629681 | orchestrator | Sunday 29 March 2026 02:38:05 +0000 (0:00:08.287) 0:00:23.564 ********** 2026-03-29 02:38:21.629691 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:38:21.629709 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:38:21.629720 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:38:21.629729 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:38:21.629740 | orchestrator | ok: 
[testbed-node-4] 2026-03-29 02:38:21.629749 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:38:21.629759 | orchestrator | 2026-03-29 02:38:21.629769 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-29 02:38:21.629780 | orchestrator | Sunday 29 March 2026 02:38:07 +0000 (0:00:01.129) 0:00:24.694 ********** 2026-03-29 02:38:21.629790 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:38:21.629800 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:38:21.629810 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:38:21.629820 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:38:21.629829 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:38:21.629839 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:38:21.629848 | orchestrator | 2026-03-29 02:38:21.629857 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-29 02:38:21.629865 | orchestrator | Sunday 29 March 2026 02:38:15 +0000 (0:00:08.035) 0:00:32.729 ********** 2026-03-29 02:38:21.629874 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-29 02:38:21.629890 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-29 02:38:21.629905 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-29 02:38:21.629919 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-29 02:38:21.629933 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-29 02:38:21.629948 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-29 
02:38:21.629963 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-29 02:38:21.629998 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-29 02:38:34.756658 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-29 02:38:34.756754 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-29 02:38:34.756764 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-29 02:38:34.756771 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-29 02:38:34.756777 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-29 02:38:34.756784 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-29 02:38:34.756790 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-29 02:38:34.756796 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-29 02:38:34.756802 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-29 02:38:34.756808 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-29 02:38:34.756814 | orchestrator | 2026-03-29 02:38:34.756822 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-03-29 02:38:34.756830 | orchestrator | Sunday 29 March 2026 02:38:21 +0000 (0:00:06.560) 0:00:39.289 ********** 2026-03-29 02:38:34.756838 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-29 02:38:34.756844 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:38:34.756853 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-29 02:38:34.756859 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:38:34.756865 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-29 02:38:34.756871 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:38:34.756877 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-29 02:38:34.756884 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-29 02:38:34.756890 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-29 02:38:34.756897 | orchestrator | 2026-03-29 02:38:34.756903 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-29 02:38:34.756909 | orchestrator | Sunday 29 March 2026 02:38:24 +0000 (0:00:02.566) 0:00:41.856 ********** 2026-03-29 02:38:34.756916 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-29 02:38:34.756922 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:38:34.756928 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-29 02:38:34.756934 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:38:34.756941 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-29 02:38:34.756947 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:38:34.756953 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-29 02:38:34.756960 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-29 02:38:34.756981 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-29 02:38:34.756988 | orchestrator 
| 2026-03-29 02:38:34.756994 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-29 02:38:34.757000 | orchestrator | Sunday 29 March 2026 02:38:27 +0000 (0:00:03.166) 0:00:45.022 ********** 2026-03-29 02:38:34.757006 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:38:34.757012 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:38:34.757018 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:38:34.757043 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:38:34.757049 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:38:34.757055 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:38:34.757060 | orchestrator | 2026-03-29 02:38:34.757066 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:38:34.757074 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 02:38:34.757081 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 02:38:34.757087 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 02:38:34.757093 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 02:38:34.757099 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 02:38:34.757104 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 02:38:34.757131 | orchestrator | 2026-03-29 02:38:34.757137 | orchestrator | 2026-03-29 02:38:34.757143 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:38:34.757149 | orchestrator | Sunday 29 March 2026 02:38:34 +0000 (0:00:07.119) 0:00:52.141 ********** 2026-03-29 02:38:34.757171 | 
orchestrator | =============================================================================== 2026-03-29 02:38:34.757178 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.15s 2026-03-29 02:38:34.757184 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.29s 2026-03-29 02:38:34.757189 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.56s 2026-03-29 02:38:34.757195 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.17s 2026-03-29 02:38:34.757201 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.57s 2026-03-29 02:38:34.757208 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.45s 2026-03-29 02:38:34.757214 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.76s 2026-03-29 02:38:34.757219 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.50s 2026-03-29 02:38:34.757225 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.45s 2026-03-29 02:38:34.757232 | orchestrator | module-load : Load modules ---------------------------------------------- 1.27s 2026-03-29 02:38:34.757238 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.22s 2026-03-29 02:38:34.757244 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.16s 2026-03-29 02:38:34.757250 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.13s 2026-03-29 02:38:34.757256 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.04s 2026-03-29 02:38:34.757263 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.02s 2026-03-29 02:38:34.757269 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.81s 2026-03-29 02:38:34.757276 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s 2026-03-29 02:38:34.757282 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2026-03-29 02:38:36.752095 | orchestrator | 2026-03-29 02:38:36 | INFO  | Task 39c49b56-3896-44a4-bcb1-5b0166dfa0ae (ovn) was prepared for execution. 2026-03-29 02:38:36.752171 | orchestrator | 2026-03-29 02:38:36 | INFO  | It takes a moment until task 39c49b56-3896-44a4-bcb1-5b0166dfa0ae (ovn) has been started and output is visible here. 2026-03-29 02:38:47.271465 | orchestrator | 2026-03-29 02:38:47.271625 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 02:38:47.271649 | orchestrator | 2026-03-29 02:38:47.271664 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 02:38:47.271679 | orchestrator | Sunday 29 March 2026 02:38:40 +0000 (0:00:00.169) 0:00:00.169 ********** 2026-03-29 02:38:47.271694 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:38:47.271709 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:38:47.271722 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:38:47.271735 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:38:47.271749 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:38:47.271762 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:38:47.271775 | orchestrator | 2026-03-29 02:38:47.271788 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 02:38:47.271801 | orchestrator | Sunday 29 March 2026 02:38:41 +0000 (0:00:00.706) 0:00:00.875 ********** 2026-03-29 02:38:47.271834 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-29 02:38:47.271849 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-29 
02:38:47.271862 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-29 02:38:47.271876 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-29 02:38:47.271890 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-29 02:38:47.271905 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-29 02:38:47.271919 | orchestrator | 2026-03-29 02:38:47.271935 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-29 02:38:47.271950 | orchestrator | 2026-03-29 02:38:47.271965 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-29 02:38:47.271979 | orchestrator | Sunday 29 March 2026 02:38:42 +0000 (0:00:00.849) 0:00:01.725 ********** 2026-03-29 02:38:47.271995 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:38:47.272010 | orchestrator | 2026-03-29 02:38:47.272024 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-29 02:38:47.272039 | orchestrator | Sunday 29 March 2026 02:38:43 +0000 (0:00:01.169) 0:00:02.895 ********** 2026-03-29 02:38:47.272055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272197 | orchestrator | 2026-03-29 02:38:47.272211 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-29 02:38:47.272225 | orchestrator | Sunday 29 March 2026 02:38:44 +0000 (0:00:01.266) 0:00:04.161 ********** 2026-03-29 02:38:47.272249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272280 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272294 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272348 | orchestrator | 2026-03-29 02:38:47.272363 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-29 02:38:47.272377 | orchestrator | Sunday 29 March 2026 02:38:46 +0000 (0:00:01.503) 0:00:05.665 ********** 2026-03-29 02:38:47.272392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:38:47.272475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459342 | orchestrator | 2026-03-29 02:39:12.459356 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-29 02:39:12.459368 | orchestrator | Sunday 29 March 2026 02:38:47 +0000 (0:00:01.194) 0:00:06.859 ********** 2026-03-29 02:39:12.459380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459490 | orchestrator | 2026-03-29 02:39:12.459502 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-29 02:39:12.459513 | orchestrator | Sunday 29 March 2026 02:38:48 +0000 (0:00:01.523) 0:00:08.383 ********** 
2026-03-29 02:39:12.459626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459684 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 02:39:12.459709 | orchestrator | 2026-03-29 02:39:12.459721 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-29 02:39:12.459735 | orchestrator | Sunday 29 March 2026 02:38:50 +0000 (0:00:01.470) 0:00:09.854 ********** 2026-03-29 02:39:12.459748 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:39:12.459763 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:39:12.459776 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:39:12.459788 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:39:12.459801 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:39:12.459813 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:39:12.459825 | orchestrator | 2026-03-29 02:39:12.459838 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-29 02:39:12.459850 | orchestrator | Sunday 29 March 2026 02:38:52 +0000 (0:00:02.537) 0:00:12.391 ********** 2026-03-29 02:39:12.459863 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 
2026-03-29 02:39:12.459876 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-29 02:39:12.459889 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-29 02:39:12.459901 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-29 02:39:12.459913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-29 02:39:12.459925 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-29 02:39:12.459946 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 02:39:48.000222 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 02:39:48.000371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 02:39:48.000389 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 02:39:48.000415 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 02:39:48.000426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 02:39:48.000436 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 02:39:48.000448 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 02:39:48.000459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 02:39:48.000490 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 02:39:48.000500 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 02:39:48.000510 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 02:39:48.000521 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 02:39:48.000532 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 02:39:48.000541 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 02:39:48.000583 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 02:39:48.000595 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 02:39:48.000605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 02:39:48.000615 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 02:39:48.000625 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 02:39:48.000634 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 02:39:48.000644 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 02:39:48.000654 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-03-29 02:39:48.000664 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 02:39:48.000673 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 02:39:48.000683 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 02:39:48.000693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 02:39:48.000702 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 02:39:48.000712 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 02:39:48.000722 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 02:39:48.000732 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 02:39:48.000742 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 02:39:48.000755 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 02:39:48.000767 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 02:39:48.000792 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 02:39:48.000804 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 02:39:48.000816 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 
'present'}) 2026-03-29 02:39:48.000852 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-29 02:39:48.000864 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-29 02:39:48.000879 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-29 02:39:48.000889 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-29 02:39:48.000899 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-29 02:39:48.000908 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 02:39:48.000918 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 02:39:48.000928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 02:39:48.000937 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 02:39:48.000947 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 02:39:48.000957 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 02:39:48.000967 | orchestrator | 2026-03-29 02:39:48.000977 | orchestrator | TASK [ovn-controller : Flush handlers] 
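The "Configure OVN in OVSDB" task above writes these settings into the local Open vSwitch database as `external-ids` on the `Open_vSwitch` table (after the preceding task created `br-int`). A minimal sketch of the equivalent manual commands, using the encap IP and remote list for testbed-node-0 as seen in the log; the commands are printed rather than executed so the sketch is self-contained and does not assume a running OVS:

```shell
#!/bin/sh
# Sketch only: reproduces the ovs-vsctl invocations implied by the task
# output above. ENCAP_IP / OVN_REMOTE are the testbed-node-0 values from
# the log; adjust per node. Commands are echoed, not run.
ENCAP_IP=192.168.16.10
OVN_REMOTE="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"

# br-int is created by the previous task ("Create br-int bridge on OpenvSwitch")
echo "ovs-vsctl --may-exist add-br br-int"

for setting in \
    "ovn-encap-ip=${ENCAP_IP}" \
    "ovn-encap-type=geneve" \
    "ovn-remote=${OVN_REMOTE}" \
    "ovn-remote-probe-interval=60000" \
    "ovn-openflow-probe-interval=60" \
    "ovn-monitor-all=false"; do
  echo "ovs-vsctl set Open_vSwitch . external-ids:${setting}"
done
```

Gateway-capable nodes (0-2 in this run) additionally get `ovn-bridge-mappings=physnet1:br-ex` and `ovn-cms-options=enable-chassis-as-gw,availability-zones=nova`, while compute-only nodes (3-5) get a per-node `ovn-chassis-mac-mappings` entry instead, which matches the `present`/`absent` split in the output above.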
***************************************** 2026-03-29 02:39:48.000987 | orchestrator | Sunday 29 March 2026 02:39:11 +0000 (0:00:19.027) 0:00:31.418 ********** 2026-03-29 02:39:48.000997 | orchestrator | 2026-03-29 02:39:48.001007 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 02:39:48.001017 | orchestrator | Sunday 29 March 2026 02:39:12 +0000 (0:00:00.289) 0:00:31.708 ********** 2026-03-29 02:39:48.001026 | orchestrator | 2026-03-29 02:39:48.001036 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 02:39:48.001046 | orchestrator | Sunday 29 March 2026 02:39:12 +0000 (0:00:00.067) 0:00:31.776 ********** 2026-03-29 02:39:48.001055 | orchestrator | 2026-03-29 02:39:48.001065 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 02:39:48.001075 | orchestrator | Sunday 29 March 2026 02:39:12 +0000 (0:00:00.067) 0:00:31.843 ********** 2026-03-29 02:39:48.001084 | orchestrator | 2026-03-29 02:39:48.001094 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 02:39:48.001104 | orchestrator | Sunday 29 March 2026 02:39:12 +0000 (0:00:00.067) 0:00:31.911 ********** 2026-03-29 02:39:48.001113 | orchestrator | 2026-03-29 02:39:48.001123 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 02:39:48.001133 | orchestrator | Sunday 29 March 2026 02:39:12 +0000 (0:00:00.066) 0:00:31.977 ********** 2026-03-29 02:39:48.001142 | orchestrator | 2026-03-29 02:39:48.001152 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-29 02:39:48.001162 | orchestrator | Sunday 29 March 2026 02:39:12 +0000 (0:00:00.066) 0:00:32.044 ********** 2026-03-29 02:39:48.001172 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:39:48.001182 | orchestrator | ok: 
[testbed-node-5] 2026-03-29 02:39:48.001192 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:39:48.001201 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:39:48.001211 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:39:48.001220 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:39:48.001230 | orchestrator | 2026-03-29 02:39:48.001239 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-29 02:39:48.001249 | orchestrator | Sunday 29 March 2026 02:39:14 +0000 (0:00:01.626) 0:00:33.670 ********** 2026-03-29 02:39:48.001265 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:39:48.001276 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:39:48.001285 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:39:48.001295 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:39:48.001305 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:39:48.001315 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:39:48.001324 | orchestrator | 2026-03-29 02:39:48.001334 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-29 02:39:48.001344 | orchestrator | 2026-03-29 02:39:48.001354 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 02:39:48.001364 | orchestrator | Sunday 29 March 2026 02:39:45 +0000 (0:00:31.690) 0:01:05.361 ********** 2026-03-29 02:39:48.001374 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:39:48.001384 | orchestrator | 2026-03-29 02:39:48.001394 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 02:39:48.001418 | orchestrator | Sunday 29 March 2026 02:39:46 +0000 (0:00:00.736) 0:01:06.097 ********** 2026-03-29 02:39:48.001428 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-29 02:39:48.001438 | orchestrator | 2026-03-29 02:39:48.001448 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-29 02:39:48.001458 | orchestrator | Sunday 29 March 2026 02:39:47 +0000 (0:00:00.511) 0:01:06.609 ********** 2026-03-29 02:39:48.001468 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:39:48.001477 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:39:48.001487 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:39:48.001497 | orchestrator | 2026-03-29 02:39:48.001507 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-29 02:39:48.001522 | orchestrator | Sunday 29 March 2026 02:39:47 +0000 (0:00:00.971) 0:01:07.581 ********** 2026-03-29 02:39:59.666457 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:39:59.666545 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:39:59.666556 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:39:59.666611 | orchestrator | 2026-03-29 02:39:59.666621 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-29 02:39:59.666641 | orchestrator | Sunday 29 March 2026 02:39:48 +0000 (0:00:00.336) 0:01:07.917 ********** 2026-03-29 02:39:59.666648 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:39:59.666655 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:39:59.666662 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:39:59.666669 | orchestrator | 2026-03-29 02:39:59.666676 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-29 02:39:59.666683 | orchestrator | Sunday 29 March 2026 02:39:48 +0000 (0:00:00.343) 0:01:08.261 ********** 2026-03-29 02:39:59.666689 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:39:59.666696 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:39:59.666703 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:39:59.666709 | orchestrator | 
2026-03-29 02:39:59.666716 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-29 02:39:59.666723 | orchestrator | Sunday 29 March 2026 02:39:48 +0000 (0:00:00.316) 0:01:08.578 ********** 2026-03-29 02:39:59.666729 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:39:59.666736 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:39:59.666743 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:39:59.666749 | orchestrator | 2026-03-29 02:39:59.666756 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-29 02:39:59.666762 | orchestrator | Sunday 29 March 2026 02:39:49 +0000 (0:00:00.570) 0:01:09.148 ********** 2026-03-29 02:39:59.666769 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:39:59.666776 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:39:59.666783 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:39:59.666790 | orchestrator | 2026-03-29 02:39:59.666796 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-29 02:39:59.666822 | orchestrator | Sunday 29 March 2026 02:39:49 +0000 (0:00:00.297) 0:01:09.446 ********** 2026-03-29 02:39:59.666829 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:39:59.666836 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:39:59.666842 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:39:59.666849 | orchestrator | 2026-03-29 02:39:59.666855 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-29 02:39:59.666862 | orchestrator | Sunday 29 March 2026 02:39:50 +0000 (0:00:00.297) 0:01:09.743 ********** 2026-03-29 02:39:59.666869 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:39:59.666875 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:39:59.666882 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:39:59.666888 | orchestrator | 2026-03-29 
02:39:59.666895 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-29 02:39:59.666902 | orchestrator | Sunday 29 March 2026 02:39:50 +0000 (0:00:00.316) 0:01:10.060 **********
2026-03-29 02:39:59.666908 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.666915 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.666921 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.666928 | orchestrator |
2026-03-29 02:39:59.666935 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-29 02:39:59.666941 | orchestrator | Sunday 29 March 2026 02:39:50 +0000 (0:00:00.279) 0:01:10.339 **********
2026-03-29 02:39:59.666948 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.666955 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.666962 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.666968 | orchestrator |
2026-03-29 02:39:59.666975 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-29 02:39:59.666982 | orchestrator | Sunday 29 March 2026 02:39:51 +0000 (0:00:00.570) 0:01:10.910 **********
2026-03-29 02:39:59.666988 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.666995 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667001 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667008 | orchestrator |
2026-03-29 02:39:59.667014 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-29 02:39:59.667022 | orchestrator | Sunday 29 March 2026 02:39:51 +0000 (0:00:00.301) 0:01:11.211 **********
2026-03-29 02:39:59.667030 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667037 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667044 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667052 | orchestrator |
2026-03-29 02:39:59.667060 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-29 02:39:59.667067 | orchestrator | Sunday 29 March 2026 02:39:51 +0000 (0:00:00.312) 0:01:11.523 **********
2026-03-29 02:39:59.667075 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667082 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667089 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667099 | orchestrator |
2026-03-29 02:39:59.667110 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-29 02:39:59.667122 | orchestrator | Sunday 29 March 2026 02:39:52 +0000 (0:00:00.300) 0:01:11.824 **********
2026-03-29 02:39:59.667133 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667143 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667155 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667168 | orchestrator |
2026-03-29 02:39:59.667179 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-29 02:39:59.667192 | orchestrator | Sunday 29 March 2026 02:39:52 +0000 (0:00:00.544) 0:01:12.369 **********
2026-03-29 02:39:59.667200 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667208 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667216 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667224 | orchestrator |
2026-03-29 02:39:59.667232 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-29 02:39:59.667240 | orchestrator | Sunday 29 March 2026 02:39:53 +0000 (0:00:00.374) 0:01:12.743 **********
2026-03-29 02:39:59.667254 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667262 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667269 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667277 | orchestrator |
2026-03-29 02:39:59.667284 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-29 02:39:59.667292 | orchestrator | Sunday 29 March 2026 02:39:53 +0000 (0:00:00.300) 0:01:13.044 **********
2026-03-29 02:39:59.667312 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667320 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667328 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667336 | orchestrator |
2026-03-29 02:39:59.667344 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-29 02:39:59.667356 | orchestrator | Sunday 29 March 2026 02:39:53 +0000 (0:00:00.311) 0:01:13.355 **********
2026-03-29 02:39:59.667364 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:39:59.667372 | orchestrator |
2026-03-29 02:39:59.667380 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-29 02:39:59.667387 | orchestrator | Sunday 29 March 2026 02:39:54 +0000 (0:00:00.829) 0:01:14.185 **********
2026-03-29 02:39:59.667394 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:39:59.667400 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:39:59.667407 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:39:59.667414 | orchestrator |
2026-03-29 02:39:59.667420 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-29 02:39:59.667427 | orchestrator | Sunday 29 March 2026 02:39:55 +0000 (0:00:00.454) 0:01:14.639 **********
2026-03-29 02:39:59.667433 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:39:59.667440 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:39:59.667447 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:39:59.667453 | orchestrator |
2026-03-29 02:39:59.667460 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-29 02:39:59.667467 | orchestrator | Sunday 29 March 2026 02:39:55 +0000 (0:00:00.446) 0:01:15.086 **********
2026-03-29 02:39:59.667473 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667480 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667486 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667493 | orchestrator |
2026-03-29 02:39:59.667499 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-29 02:39:59.667506 | orchestrator | Sunday 29 March 2026 02:39:55 +0000 (0:00:00.357) 0:01:15.444 **********
2026-03-29 02:39:59.667513 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667519 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667526 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667532 | orchestrator |
2026-03-29 02:39:59.667539 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-29 02:39:59.667545 | orchestrator | Sunday 29 March 2026 02:39:56 +0000 (0:00:00.629) 0:01:16.074 **********
2026-03-29 02:39:59.667552 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667559 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667611 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667618 | orchestrator |
2026-03-29 02:39:59.667625 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-29 02:39:59.667632 | orchestrator | Sunday 29 March 2026 02:39:56 +0000 (0:00:00.355) 0:01:16.430 **********
2026-03-29 02:39:59.667638 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667645 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667652 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667658 | orchestrator |
2026-03-29 02:39:59.667665 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-29 02:39:59.667671 | orchestrator | Sunday 29 March 2026 02:39:57 +0000 (0:00:00.367) 0:01:16.797 **********
2026-03-29 02:39:59.667678 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667692 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667699 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667706 | orchestrator |
2026-03-29 02:39:59.667713 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-29 02:39:59.667719 | orchestrator | Sunday 29 March 2026 02:39:57 +0000 (0:00:00.331) 0:01:17.129 **********
2026-03-29 02:39:59.667726 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:39:59.667732 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:39:59.667739 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:39:59.667745 | orchestrator |
2026-03-29 02:39:59.667752 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-29 02:39:59.667759 | orchestrator | Sunday 29 March 2026 02:39:58 +0000 (0:00:00.595) 0:01:17.724 **********
2026-03-29 02:39:59.667768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:39:59.667777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:39:59.667784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:39:59.667802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369933 | orchestrator |
2026-03-29 02:40:06.369947 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-29 02:40:06.369959 | orchestrator | Sunday 29 March 2026 02:39:59 +0000 (0:00:01.523) 0:01:19.248 **********
2026-03-29 02:40:06.369968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.369990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370114 | orchestrator |
2026-03-29 02:40:06.370122 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-29 02:40:06.370129 | orchestrator | Sunday 29 March 2026 02:40:03 +0000 (0:00:04.022) 0:01:23.271 **********
2026-03-29 02:40:06.370136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:06.370181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.873234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.873362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.873377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.873387 | orchestrator |
2026-03-29 02:40:30.873398 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 02:40:30.873408 | orchestrator | Sunday 29 March 2026 02:40:05 +0000 (0:00:02.226) 0:01:25.498 **********
2026-03-29 02:40:30.873417 | orchestrator |
2026-03-29 02:40:30.873427 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 02:40:30.873436 | orchestrator | Sunday 29 March 2026 02:40:05 +0000 (0:00:00.065) 0:01:25.563 **********
2026-03-29 02:40:30.873444 | orchestrator |
2026-03-29 02:40:30.873453 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 02:40:30.873462 | orchestrator | Sunday 29 March 2026 02:40:06 +0000 (0:00:00.320) 0:01:25.883 **********
2026-03-29 02:40:30.873471 | orchestrator |
2026-03-29 02:40:30.873480 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-29 02:40:30.873489 | orchestrator | Sunday 29 March 2026 02:40:06 +0000 (0:00:00.071) 0:01:25.955 **********
2026-03-29 02:40:30.873498 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:40:30.873509 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:40:30.873518 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:40:30.873526 | orchestrator |
2026-03-29 02:40:30.873535 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-29 02:40:30.873544 | orchestrator | Sunday 29 March 2026 02:40:13 +0000 (0:00:06.653) 0:01:32.608 **********
2026-03-29 02:40:30.873553 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:40:30.873562 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:40:30.873570 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:40:30.873579 | orchestrator |
2026-03-29 02:40:30.873678 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-29 02:40:30.873700 | orchestrator | Sunday 29 March 2026 02:40:15 +0000 (0:00:02.789) 0:01:35.397 **********
2026-03-29 02:40:30.873711 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:40:30.873720 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:40:30.873729 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:40:30.873737 | orchestrator |
2026-03-29 02:40:30.873746 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-29 02:40:30.873755 | orchestrator | Sunday 29 March 2026 02:40:23 +0000 (0:00:07.840) 0:01:43.238 **********
2026-03-29 02:40:30.873764 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:40:30.873775 | orchestrator |
2026-03-29 02:40:30.873785 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-29 02:40:30.873795 | orchestrator | Sunday 29 March 2026 02:40:23 +0000 (0:00:00.147) 0:01:43.385 **********
2026-03-29 02:40:30.873805 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:40:30.873816 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:40:30.873832 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:40:30.873848 | orchestrator |
2026-03-29 02:40:30.873862 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-29 02:40:30.873877 | orchestrator | Sunday 29 March 2026 02:40:24 +0000 (0:00:01.053) 0:01:44.439 **********
2026-03-29 02:40:30.873891 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:40:30.873920 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:40:30.873936 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:40:30.873953 | orchestrator |
2026-03-29 02:40:30.873970 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-29 02:40:30.873985 | orchestrator | Sunday 29 March 2026 02:40:25 +0000 (0:00:00.625) 0:01:45.065 **********
2026-03-29 02:40:30.873999 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:40:30.874079 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:40:30.874094 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:40:30.874103 | orchestrator |
2026-03-29 02:40:30.874112 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-29 02:40:30.874121 | orchestrator | Sunday 29 March 2026 02:40:26 +0000 (0:00:00.789) 0:01:45.854 **********
2026-03-29 02:40:30.874162 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:40:30.874172 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:40:30.874189 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:40:30.874198 | orchestrator |
2026-03-29 02:40:30.874207 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-29 02:40:30.874216 | orchestrator | Sunday 29 March 2026 02:40:26 +0000 (0:00:00.654) 0:01:46.508 **********
2026-03-29 02:40:30.874225 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:40:30.874234 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:40:30.874260 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:40:30.874270 | orchestrator |
2026-03-29 02:40:30.874279 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-29 02:40:30.874287 | orchestrator | Sunday 29 March 2026 02:40:28 +0000 (0:00:01.302) 0:01:47.811 **********
2026-03-29 02:40:30.874296 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:40:30.874305 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:40:30.874314 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:40:30.874322 | orchestrator |
2026-03-29 02:40:30.874331 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-29 02:40:30.874340 | orchestrator | Sunday 29 March 2026 02:40:29 +0000 (0:00:00.304) 0:01:48.604 **********
2026-03-29 02:40:30.874349 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:40:30.874358 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:40:30.874366 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:40:30.874375 | orchestrator |
2026-03-29 02:40:30.874383 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-29 02:40:30.874392 | orchestrator | Sunday 29 March 2026 02:40:29 +0000 (0:00:00.304) 0:01:48.909 **********
2026-03-29 02:40:30.874404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.874415 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.874424 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.874433 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.874452 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.874461 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.874470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.874484 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:30.874500 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385476 | orchestrator |
2026-03-29 02:40:38.385556 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-29 02:40:38.385563 | orchestrator | Sunday 29 March 2026 02:40:30 +0000 (0:00:01.542) 0:01:50.451 **********
2026-03-29 02:40:38.385569 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385576 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385580 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385584 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385660 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385681 | orchestrator |
2026-03-29 02:40:38.385685 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-29 02:40:38.385689 | orchestrator | Sunday 29 March 2026 02:40:34 +0000 (0:00:04.108) 0:01:54.559 **********
2026-03-29 02:40:38.385704 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385708 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385712 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385716 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385731 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 02:40:38.385745 | orchestrator |
2026-03-29 02:40:38.385749 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 02:40:38.385753 | orchestrator | Sunday 29 March 2026 02:40:38 +0000 (0:00:03.187) 0:01:57.747 **********
2026-03-29 02:40:38.385756 | orchestrator |
2026-03-29 02:40:38.385760 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 02:40:38.385764 | orchestrator | Sunday 29 March 2026 02:40:38 +0000 (0:00:00.067) 0:01:57.814 **********
2026-03-29 02:40:38.385768 | orchestrator |
2026-03-29 02:40:38.385771 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 02:40:38.385775 | orchestrator | Sunday 29 March 2026 02:40:38 +0000 (0:00:00.069) 0:01:57.884 **********
2026-03-29 02:40:38.385779 | orchestrator |
2026-03-29 02:40:38.385786 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-29 02:41:02.711264 | orchestrator | Sunday 29 March 2026 02:40:38 +0000 (0:00:00.068) 0:01:57.953 **********
2026-03-29 02:41:02.711370 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:41:02.711386 | orchestrator | changed:
[testbed-node-2] 2026-03-29 02:41:02.711396 | orchestrator | 2026-03-29 02:41:02.711406 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-29 02:41:02.711416 | orchestrator | Sunday 29 March 2026 02:40:44 +0000 (0:00:06.225) 0:02:04.178 ********** 2026-03-29 02:41:02.711425 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:41:02.711434 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:41:02.711443 | orchestrator | 2026-03-29 02:41:02.711452 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-29 02:41:02.711484 | orchestrator | Sunday 29 March 2026 02:40:50 +0000 (0:00:06.200) 0:02:10.379 ********** 2026-03-29 02:41:02.711494 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:41:02.711503 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:41:02.711511 | orchestrator | 2026-03-29 02:41:02.711520 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-29 02:41:02.711530 | orchestrator | Sunday 29 March 2026 02:40:56 +0000 (0:00:06.200) 0:02:16.579 ********** 2026-03-29 02:41:02.711538 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:41:02.711547 | orchestrator | 2026-03-29 02:41:02.711556 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-29 02:41:02.711564 | orchestrator | Sunday 29 March 2026 02:40:57 +0000 (0:00:00.143) 0:02:16.723 ********** 2026-03-29 02:41:02.711573 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:41:02.711583 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:41:02.711592 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:41:02.711600 | orchestrator | 2026-03-29 02:41:02.711609 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-29 02:41:02.711667 | orchestrator | Sunday 29 March 2026 02:40:58 +0000 (0:00:01.049) 0:02:17.772 ********** 
2026-03-29 02:41:02.711678 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:41:02.711687 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:41:02.711696 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:41:02.711705 | orchestrator |
2026-03-29 02:41:02.711714 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-29 02:41:02.711722 | orchestrator | Sunday 29 March 2026 02:40:58 +0000 (0:00:00.647) 0:02:18.421 **********
2026-03-29 02:41:02.711732 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:41:02.711740 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:41:02.711749 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:41:02.711758 | orchestrator |
2026-03-29 02:41:02.711767 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-29 02:41:02.711776 | orchestrator | Sunday 29 March 2026 02:40:59 +0000 (0:00:00.750) 0:02:19.171 **********
2026-03-29 02:41:02.711785 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:41:02.711796 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:41:02.711806 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:41:02.711816 | orchestrator |
2026-03-29 02:41:02.711827 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-29 02:41:02.711837 | orchestrator | Sunday 29 March 2026 02:41:00 +0000 (0:00:00.645) 0:02:19.816 **********
2026-03-29 02:41:02.711848 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:41:02.711858 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:41:02.711868 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:41:02.711878 | orchestrator |
2026-03-29 02:41:02.711889 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-29 02:41:02.711899 | orchestrator | Sunday 29 March 2026 02:41:01 +0000 (0:00:01.107) 0:02:20.924 **********
2026-03-29 02:41:02.711909 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:41:02.711919 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:41:02.711929 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:41:02.711940 | orchestrator |
2026-03-29 02:41:02.711949 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:41:02.711961 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-29 02:41:02.711973 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-29 02:41:02.711984 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-29 02:41:02.711994 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 02:41:02.712013 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 02:41:02.712022 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 02:41:02.712031 | orchestrator |
2026-03-29 02:41:02.712040 | orchestrator |
2026-03-29 02:41:02.712061 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:41:02.712071 | orchestrator | Sunday 29 March 2026 02:41:02 +0000 (0:00:00.928) 0:02:21.852 **********
2026-03-29 02:41:02.712079 | orchestrator | ===============================================================================
2026-03-29 02:41:02.712088 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 31.69s
2026-03-29 02:41:02.712097 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.03s
2026-03-29 02:41:02.712106 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.04s
2026-03-29 02:41:02.712114 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 12.88s
2026-03-29 02:41:02.712123 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.99s
2026-03-29 02:41:02.712148 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.11s
2026-03-29 02:41:02.712158 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.02s
2026-03-29 02:41:02.712166 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.19s
2026-03-29 02:41:02.712175 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.54s
2026-03-29 02:41:02.712184 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.23s
2026-03-29 02:41:02.712193 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.63s
2026-03-29 02:41:02.712201 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.54s
2026-03-29 02:41:02.712210 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.52s
2026-03-29 02:41:02.712219 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.52s
2026-03-29 02:41:02.712228 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.50s
2026-03-29 02:41:02.712236 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.47s
2026-03-29 02:41:02.712245 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.30s
2026-03-29 02:41:02.712254 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.27s
2026-03-29 02:41:02.712263 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.19s
2026-03-29 02:41:02.712271 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.17s
2026-03-29 02:41:03.071931 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-29 02:41:03.072003 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-03-29 02:41:05.223177 | orchestrator | 2026-03-29 02:41:05 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-29 02:41:15.366762 | orchestrator | 2026-03-29 02:41:15 | INFO  | Task ada892b3-eddb-48c8-a5b3-92345182e682 (wipe-partitions) was prepared for execution.
2026-03-29 02:41:15.366872 | orchestrator | 2026-03-29 02:41:15 | INFO  | It takes a moment until task ada892b3-eddb-48c8-a5b3-92345182e682 (wipe-partitions) has been started and output is visible here.
2026-03-29 02:41:27.334094 | orchestrator |
2026-03-29 02:41:27.334191 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-29 02:41:27.334206 | orchestrator |
2026-03-29 02:41:27.334216 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-29 02:41:27.334227 | orchestrator | Sunday 29 March 2026 02:41:19 +0000 (0:00:00.118) 0:00:00.118 **********
2026-03-29 02:41:27.334256 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:41:27.334268 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:41:27.334278 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:41:27.334288 | orchestrator |
2026-03-29 02:41:27.334298 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-29 02:41:27.334308 | orchestrator | Sunday 29 March 2026 02:41:19 +0000 (0:00:00.583) 0:00:00.702 **********
2026-03-29 02:41:27.334317 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:41:27.334327 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:41:27.334336 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:41:27.334346 | orchestrator |
2026-03-29 02:41:27.334356 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-29 02:41:27.334366 | orchestrator | Sunday 29 March 2026 02:41:20 +0000 (0:00:00.327) 0:00:01.030 **********
2026-03-29 02:41:27.334375 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:41:27.334385 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:41:27.334395 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:41:27.334405 | orchestrator |
2026-03-29 02:41:27.334414 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-29 02:41:27.334424 | orchestrator | Sunday 29 March 2026 02:41:20 +0000 (0:00:00.562) 0:00:01.592 **********
2026-03-29 02:41:27.334434 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:41:27.334443 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:41:27.334454 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:41:27.334463 | orchestrator |
2026-03-29 02:41:27.334473 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-29 02:41:27.334483 | orchestrator | Sunday 29 March 2026 02:41:20 +0000 (0:00:00.229) 0:00:01.822 **********
2026-03-29 02:41:27.334492 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-29 02:41:27.334503 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-29 02:41:27.334513 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-29 02:41:27.334522 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-29 02:41:27.334532 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-29 02:41:27.334541 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-29 02:41:27.334551 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-29 02:41:27.334570 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-29 02:41:27.334580 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-29 02:41:27.334589 | orchestrator |
2026-03-29 02:41:27.334599 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-29 02:41:27.334610 | orchestrator | Sunday 29 March 2026 02:41:22 +0000 (0:00:01.280) 0:00:03.102 **********
2026-03-29 02:41:27.334621 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-29 02:41:27.334633 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-29 02:41:27.334682 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-29 02:41:27.334693 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-29 02:41:27.334704 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-29 02:41:27.334714 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-29 02:41:27.334725 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-29 02:41:27.334736 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-29 02:41:27.334747 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-29 02:41:27.334758 | orchestrator |
2026-03-29 02:41:27.334769 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-29 02:41:27.334780 | orchestrator | Sunday 29 March 2026 02:41:23 +0000 (0:00:01.531) 0:00:04.634 **********
2026-03-29 02:41:27.334791 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-29 02:41:27.334802 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-29 02:41:27.334813 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-29 02:41:27.334824 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-29 02:41:27.334842 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-29 02:41:27.334854 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-29 02:41:27.334865 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-29 02:41:27.334876 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-29 02:41:27.334887 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-29 02:41:27.334898 | orchestrator |
2026-03-29 02:41:27.334909 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-29 02:41:27.334920 | orchestrator | Sunday 29 March 2026 02:41:25 +0000 (0:00:02.115) 0:00:06.750 **********
2026-03-29 02:41:27.334932 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:41:27.334943 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:41:27.334955 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:41:27.334966 | orchestrator |
2026-03-29 02:41:27.334976 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-29 02:41:27.334986 | orchestrator | Sunday 29 March 2026 02:41:26 +0000 (0:00:00.619) 0:00:07.369 **********
2026-03-29 02:41:27.334996 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:41:27.335005 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:41:27.335015 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:41:27.335024 | orchestrator |
2026-03-29 02:41:27.335034 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:41:27.335045 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:27.335056 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:27.335082 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:27.335092 | orchestrator |
2026-03-29 02:41:27.335102 | orchestrator |
2026-03-29 02:41:27.335112 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:41:27.335121 | orchestrator | Sunday 29 March 2026 02:41:27 +0000 (0:00:00.694) 0:00:08.064 **********
2026-03-29 02:41:27.335131 | orchestrator | ===============================================================================
2026-03-29 02:41:27.335141 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.12s
2026-03-29 02:41:27.335150 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.53s
2026-03-29 02:41:27.335160 | orchestrator | Check device availability ----------------------------------------------- 1.28s
2026-03-29 02:41:27.335169 | orchestrator | Request device events from the kernel ----------------------------------- 0.69s
2026-03-29 02:41:27.335179 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s
2026-03-29 02:41:27.335188 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2026-03-29 02:41:27.335198 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.56s
2026-03-29 02:41:27.335208 | orchestrator | Remove all rook related logical devices --------------------------------- 0.33s
2026-03-29 02:41:27.335217 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s
2026-03-29 02:41:39.551595 | orchestrator | 2026-03-29 02:41:39 | INFO  | Task ab30cbdc-b206-4d09-8fff-0204623b5e9d (facts) was prepared for execution.
2026-03-29 02:41:39.551748 | orchestrator | 2026-03-29 02:41:39 | INFO  | It takes a moment until task ab30cbdc-b206-4d09-8fff-0204623b5e9d (facts) has been started and output is visible here.
2026-03-29 02:41:51.451822 | orchestrator |
2026-03-29 02:41:51.451968 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-29 02:41:51.451988 | orchestrator |
2026-03-29 02:41:51.452001 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-29 02:41:51.452013 | orchestrator | Sunday 29 March 2026 02:41:43 +0000 (0:00:00.257) 0:00:00.257 **********
2026-03-29 02:41:51.452059 | orchestrator | ok: [testbed-manager]
2026-03-29 02:41:51.452073 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:41:51.452084 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:41:51.452095 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:41:51.452106 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:41:51.452117 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:41:51.452127 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:41:51.452138 | orchestrator |
2026-03-29 02:41:51.452149 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-29 02:41:51.452161 | orchestrator | Sunday 29 March 2026 02:41:44 +0000 (0:00:01.088) 0:00:01.346 **********
2026-03-29 02:41:51.452173 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:41:51.452186 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:41:51.452197 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:41:51.452207 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:41:51.452218 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:41:51.452228 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:41:51.452239 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:41:51.452250 | orchestrator |
2026-03-29 02:41:51.452261 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-29 02:41:51.452271 | orchestrator |
2026-03-29 02:41:51.452282 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 02:41:51.452293 | orchestrator | Sunday 29 March 2026 02:41:45 +0000 (0:00:01.117) 0:00:02.464 **********
2026-03-29 02:41:51.452304 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:41:51.452315 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:41:51.452326 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:41:51.452352 | orchestrator | ok: [testbed-manager]
2026-03-29 02:41:51.452365 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:41:51.452378 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:41:51.452390 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:41:51.452402 | orchestrator |
2026-03-29 02:41:51.452414 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-29 02:41:51.452427 | orchestrator |
2026-03-29 02:41:51.452439 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-29 02:41:51.452452 | orchestrator | Sunday 29 March 2026 02:41:50 +0000 (0:00:04.856) 0:00:07.320 **********
2026-03-29 02:41:51.452464 | orchestrator | skipping: [testbed-manager]
2026-03-29 02:41:51.452477 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:41:51.452489 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:41:51.452502 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:41:51.452515 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:41:51.452526 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:41:51.452539 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:41:51.452551 | orchestrator |
2026-03-29 02:41:51.452570 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:41:51.452590 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:51.452705 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:51.452733 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:51.452751 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:51.452769 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:51.452788 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:51.452822 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 02:41:51.452841 | orchestrator |
2026-03-29 02:41:51.452859 | orchestrator |
2026-03-29 02:41:51.452877 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:41:51.452894 | orchestrator | Sunday 29 March 2026 02:41:51 +0000 (0:00:00.501) 0:00:07.822 **********
2026-03-29 02:41:51.452912 | orchestrator | ===============================================================================
2026-03-29 02:41:51.452929 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.86s
2026-03-29 02:41:51.452945 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s
2026-03-29 02:41:51.452963 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s
2026-03-29 02:41:51.452980 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-03-29 02:41:53.548530 | orchestrator | 2026-03-29 02:41:53 | INFO  | Task 7a900374-4bc7-4161-b20c-407caa428fc8 (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-29 02:41:53.548631 | orchestrator | 2026-03-29 02:41:53 | INFO  | It takes a moment until task 7a900374-4bc7-4161-b20c-407caa428fc8 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-29 02:42:04.822685 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-29 02:42:04.822785 | orchestrator | 2.16.14
2026-03-29 02:42:04.822797 | orchestrator |
2026-03-29 02:42:04.822804 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-29 02:42:04.822813 | orchestrator |
2026-03-29 02:42:04.822819 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-29 02:42:04.822826 | orchestrator | Sunday 29 March 2026 02:41:57 +0000 (0:00:00.300) 0:00:00.300 **********
2026-03-29 02:42:04.822833 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 02:42:04.822840 | orchestrator |
2026-03-29 02:42:04.822862 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-29 02:42:04.822869 | orchestrator | Sunday 29 March 2026 02:41:57 +0000 (0:00:00.253) 0:00:00.553 **********
2026-03-29 02:42:04.822876 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:42:04.822883 | orchestrator |
2026-03-29 02:42:04.822890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.822895 | orchestrator | Sunday 29 March 2026 02:41:58 +0000 (0:00:00.227) 0:00:00.781 **********
2026-03-29 02:42:04.822899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-29 02:42:04.822903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-29 02:42:04.822907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-29 02:42:04.822911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-29 02:42:04.822914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-29 02:42:04.822918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-29 02:42:04.822922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-29 02:42:04.822926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-29 02:42:04.822929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-29 02:42:04.822933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-29 02:42:04.822937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-29 02:42:04.822941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-29 02:42:04.822961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-29 02:42:04.822965 | orchestrator |
2026-03-29 02:42:04.822969 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.822972 | orchestrator | Sunday 29 March 2026 02:41:58 +0000 (0:00:00.435) 0:00:01.217 **********
2026-03-29 02:42:04.822976 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.822981 | orchestrator |
2026-03-29 02:42:04.822985 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.822989 | orchestrator | Sunday 29 March 2026 02:41:58 +0000 (0:00:00.199) 0:00:01.417 **********
2026-03-29 02:42:04.822992 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.822996 | orchestrator |
2026-03-29 02:42:04.823000 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823003 | orchestrator | Sunday 29 March 2026 02:41:58 +0000 (0:00:00.198) 0:00:01.615 **********
2026-03-29 02:42:04.823007 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823011 | orchestrator |
2026-03-29 02:42:04.823015 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823018 | orchestrator | Sunday 29 March 2026 02:41:59 +0000 (0:00:00.189) 0:00:01.805 **********
2026-03-29 02:42:04.823022 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823026 | orchestrator |
2026-03-29 02:42:04.823030 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823033 | orchestrator | Sunday 29 March 2026 02:41:59 +0000 (0:00:00.203) 0:00:02.008 **********
2026-03-29 02:42:04.823037 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823041 | orchestrator |
2026-03-29 02:42:04.823045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823048 | orchestrator | Sunday 29 March 2026 02:41:59 +0000 (0:00:00.212) 0:00:02.220 **********
2026-03-29 02:42:04.823052 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823056 | orchestrator |
2026-03-29 02:42:04.823060 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823063 | orchestrator | Sunday 29 March 2026 02:41:59 +0000 (0:00:00.195) 0:00:02.416 **********
2026-03-29 02:42:04.823067 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823071 | orchestrator |
2026-03-29 02:42:04.823074 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823078 | orchestrator | Sunday 29 March 2026 02:41:59 +0000 (0:00:00.198) 0:00:02.614 **********
2026-03-29 02:42:04.823082 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823086 | orchestrator |
2026-03-29 02:42:04.823089 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823093 | orchestrator | Sunday 29 March 2026 02:42:00 +0000 (0:00:00.197) 0:00:02.812 **********
2026-03-29 02:42:04.823097 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548)
2026-03-29 02:42:04.823102 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548)
2026-03-29 02:42:04.823106 | orchestrator |
2026-03-29 02:42:04.823110 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823125 | orchestrator | Sunday 29 March 2026 02:42:00 +0000 (0:00:00.605) 0:00:03.417 **********
2026-03-29 02:42:04.823129 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472)
2026-03-29 02:42:04.823133 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472)
2026-03-29 02:42:04.823137 | orchestrator |
2026-03-29 02:42:04.823141 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823144 | orchestrator | Sunday 29 March 2026 02:42:01 +0000 (0:00:00.573) 0:00:03.991 **********
2026-03-29 02:42:04.823151 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249)
2026-03-29 02:42:04.823160 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249)
2026-03-29 02:42:04.823163 | orchestrator |
2026-03-29 02:42:04.823167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823171 | orchestrator | Sunday 29 March 2026 02:42:02 +0000
(0:00:00.761) 0:00:04.753 **********
2026-03-29 02:42:04.823177 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e)
2026-03-29 02:42:04.823183 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e)
2026-03-29 02:42:04.823192 | orchestrator |
2026-03-29 02:42:04.823199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:04.823204 | orchestrator | Sunday 29 March 2026 02:42:02 +0000 (0:00:00.401) 0:00:05.155 **********
2026-03-29 02:42:04.823210 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-29 02:42:04.823216 | orchestrator |
2026-03-29 02:42:04.823222 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:04.823227 | orchestrator | Sunday 29 March 2026 02:42:02 +0000 (0:00:00.339) 0:00:05.495 **********
2026-03-29 02:42:04.823233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-29 02:42:04.823239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-29 02:42:04.823245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-29 02:42:04.823253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-29 02:42:04.823261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-29 02:42:04.823266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-29 02:42:04.823272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-29 02:42:04.823278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-29 02:42:04.823284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-29 02:42:04.823290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-29 02:42:04.823296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-29 02:42:04.823302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-29 02:42:04.823308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-29 02:42:04.823315 | orchestrator |
2026-03-29 02:42:04.823321 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:04.823327 | orchestrator | Sunday 29 March 2026 02:42:03 +0000 (0:00:00.346) 0:00:05.841 **********
2026-03-29 02:42:04.823332 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823338 | orchestrator |
2026-03-29 02:42:04.823344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:04.823350 | orchestrator | Sunday 29 March 2026 02:42:03 +0000 (0:00:00.203) 0:00:06.044 **********
2026-03-29 02:42:04.823356 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823362 | orchestrator |
2026-03-29 02:42:04.823368 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:04.823374 | orchestrator | Sunday 29 March 2026 02:42:03 +0000 (0:00:00.199) 0:00:06.244 **********
2026-03-29 02:42:04.823379 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823385 | orchestrator |
2026-03-29 02:42:04.823392 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:04.823397 | orchestrator | Sunday 29 March 2026 02:42:03 +0000 (0:00:00.203) 0:00:06.447 **********
2026-03-29 02:42:04.823401 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823410 | orchestrator |
2026-03-29 02:42:04.823414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:04.823418 | orchestrator | Sunday 29 March 2026 02:42:03 +0000 (0:00:00.194) 0:00:06.642 **********
2026-03-29 02:42:04.823422 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823426 | orchestrator |
2026-03-29 02:42:04.823429 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:04.823433 | orchestrator | Sunday 29 March 2026 02:42:04 +0000 (0:00:00.215) 0:00:06.857 **********
2026-03-29 02:42:04.823437 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823440 | orchestrator |
2026-03-29 02:42:04.823444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:04.823448 | orchestrator | Sunday 29 March 2026 02:42:04 +0000 (0:00:00.489) 0:00:07.347 **********
2026-03-29 02:42:04.823452 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:04.823455 | orchestrator |
2026-03-29 02:42:04.823464 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:11.767733 | orchestrator | Sunday 29 March 2026 02:42:04 +0000 (0:00:00.194) 0:00:07.541 **********
2026-03-29 02:42:11.767812 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.767820 | orchestrator |
2026-03-29 02:42:11.767825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:11.767829 | orchestrator | Sunday 29 March 2026 02:42:05 +0000 (0:00:00.194) 0:00:07.736 **********
2026-03-29 02:42:11.767834 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-29 02:42:11.767838 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-29
02:42:11.767843 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-29 02:42:11.767858 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-29 02:42:11.767862 | orchestrator |
2026-03-29 02:42:11.767866 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:11.767870 | orchestrator | Sunday 29 March 2026 02:42:05 +0000 (0:00:00.626) 0:00:08.362 **********
2026-03-29 02:42:11.767874 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.767878 | orchestrator |
2026-03-29 02:42:11.767882 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:11.767886 | orchestrator | Sunday 29 March 2026 02:42:05 +0000 (0:00:00.206) 0:00:08.568 **********
2026-03-29 02:42:11.767889 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.767893 | orchestrator |
2026-03-29 02:42:11.767897 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:11.767901 | orchestrator | Sunday 29 March 2026 02:42:06 +0000 (0:00:00.207) 0:00:08.776 **********
2026-03-29 02:42:11.767905 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.767909 | orchestrator |
2026-03-29 02:42:11.767913 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:11.767917 | orchestrator | Sunday 29 March 2026 02:42:06 +0000 (0:00:00.196) 0:00:08.973 **********
2026-03-29 02:42:11.767920 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.767924 | orchestrator |
2026-03-29 02:42:11.767928 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-29 02:42:11.767932 | orchestrator | Sunday 29 March 2026 02:42:06 +0000 (0:00:00.190) 0:00:09.164 **********
2026-03-29 02:42:11.767936 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-29 02:42:11.767940 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-29 02:42:11.767943 | orchestrator |
2026-03-29 02:42:11.767947 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-29 02:42:11.767951 | orchestrator | Sunday 29 March 2026 02:42:06 +0000 (0:00:00.159) 0:00:09.323 **********
2026-03-29 02:42:11.767955 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.767958 | orchestrator |
2026-03-29 02:42:11.767962 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-29 02:42:11.767966 | orchestrator | Sunday 29 March 2026 02:42:06 +0000 (0:00:00.123) 0:00:09.447 **********
2026-03-29 02:42:11.767984 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.767989 | orchestrator |
2026-03-29 02:42:11.767992 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-29 02:42:11.767996 | orchestrator | Sunday 29 March 2026 02:42:06 +0000 (0:00:00.132) 0:00:09.579 **********
2026-03-29 02:42:11.768000 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768004 | orchestrator |
2026-03-29 02:42:11.768008 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-29 02:42:11.768011 | orchestrator | Sunday 29 March 2026 02:42:07 +0000 (0:00:00.306) 0:00:09.886 **********
2026-03-29 02:42:11.768015 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:42:11.768019 | orchestrator |
2026-03-29 02:42:11.768023 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-29 02:42:11.768027 | orchestrator | Sunday 29 March 2026 02:42:07 +0000 (0:00:00.138) 0:00:10.025 **********
2026-03-29 02:42:11.768031 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a86fe60-1e0e-551e-abcc-872f54df7e3c'}})
2026-03-29 02:42:11.768035 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '09734191-f9bf-5626-be02-fa226447c12f'}})
2026-03-29 02:42:11.768039 | orchestrator |
2026-03-29 02:42:11.768043 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-29 02:42:11.768047 | orchestrator | Sunday 29 March 2026 02:42:07 +0000 (0:00:00.163) 0:00:10.188 **********
2026-03-29 02:42:11.768051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a86fe60-1e0e-551e-abcc-872f54df7e3c'}})
2026-03-29 02:42:11.768057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '09734191-f9bf-5626-be02-fa226447c12f'}})
2026-03-29 02:42:11.768060 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768064 | orchestrator |
2026-03-29 02:42:11.768068 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-29 02:42:11.768072 | orchestrator | Sunday 29 March 2026 02:42:07 +0000 (0:00:00.157) 0:00:10.345 **********
2026-03-29 02:42:11.768075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a86fe60-1e0e-551e-abcc-872f54df7e3c'}})
2026-03-29 02:42:11.768079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '09734191-f9bf-5626-be02-fa226447c12f'}})
2026-03-29 02:42:11.768083 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768087 | orchestrator |
2026-03-29 02:42:11.768090 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-29 02:42:11.768094 | orchestrator | Sunday 29 March 2026 02:42:07 +0000 (0:00:00.193) 0:00:10.539 **********
2026-03-29 02:42:11.768098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a86fe60-1e0e-551e-abcc-872f54df7e3c'}})
2026-03-29 02:42:11.768112 | orchestrator | skipping: [testbed-node-3]
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '09734191-f9bf-5626-be02-fa226447c12f'}})
2026-03-29 02:42:11.768117 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768120 | orchestrator |
2026-03-29 02:42:11.768125 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-29 02:42:11.768129 | orchestrator | Sunday 29 March 2026 02:42:07 +0000 (0:00:00.144) 0:00:10.683 **********
2026-03-29 02:42:11.768133 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:42:11.768136 | orchestrator |
2026-03-29 02:42:11.768140 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-29 02:42:11.768147 | orchestrator | Sunday 29 March 2026 02:42:08 +0000 (0:00:00.137) 0:00:10.821 **********
2026-03-29 02:42:11.768150 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:42:11.768154 | orchestrator |
2026-03-29 02:42:11.768158 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-29 02:42:11.768162 | orchestrator | Sunday 29 March 2026 02:42:08 +0000 (0:00:00.159) 0:00:10.980 **********
2026-03-29 02:42:11.768170 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768174 | orchestrator |
2026-03-29 02:42:11.768178 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-29 02:42:11.768181 | orchestrator | Sunday 29 March 2026 02:42:08 +0000 (0:00:00.165) 0:00:11.146 **********
2026-03-29 02:42:11.768185 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768189 | orchestrator |
2026-03-29 02:42:11.768193 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-29 02:42:11.768196 | orchestrator | Sunday 29 March 2026 02:42:08 +0000 (0:00:00.130) 0:00:11.276 **********
2026-03-29 02:42:11.768200 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768204 | orchestrator |
2026-03-29 02:42:11.768208 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-29 02:42:11.768212 | orchestrator | Sunday 29 March 2026 02:42:08 +0000 (0:00:00.121) 0:00:11.398 **********
2026-03-29 02:42:11.768215 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 02:42:11.768219 | orchestrator |     "ceph_osd_devices": {
2026-03-29 02:42:11.768223 | orchestrator |         "sdb": {
2026-03-29 02:42:11.768227 | orchestrator |             "osd_lvm_uuid": "6a86fe60-1e0e-551e-abcc-872f54df7e3c"
2026-03-29 02:42:11.768231 | orchestrator |         },
2026-03-29 02:42:11.768235 | orchestrator |         "sdc": {
2026-03-29 02:42:11.768239 | orchestrator |             "osd_lvm_uuid": "09734191-f9bf-5626-be02-fa226447c12f"
2026-03-29 02:42:11.768243 | orchestrator |         }
2026-03-29 02:42:11.768247 | orchestrator |     }
2026-03-29 02:42:11.768250 | orchestrator | }
2026-03-29 02:42:11.768254 | orchestrator |
2026-03-29 02:42:11.768258 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-29 02:42:11.768262 | orchestrator | Sunday 29 March 2026 02:42:08 +0000 (0:00:00.302) 0:00:11.700 **********
2026-03-29 02:42:11.768266 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768270 | orchestrator |
2026-03-29 02:42:11.768274 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-29 02:42:11.768279 | orchestrator | Sunday 29 March 2026 02:42:09 +0000 (0:00:00.139) 0:00:11.840 **********
2026-03-29 02:42:11.768283 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768288 | orchestrator |
2026-03-29 02:42:11.768292 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-29 02:42:11.768296 | orchestrator | Sunday 29 March 2026 02:42:09 +0000 (0:00:00.127) 0:00:11.968 **********
2026-03-29 02:42:11.768301 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:42:11.768305 | orchestrator |
2026-03-29 02:42:11.768310 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-29 02:42:11.768314 | orchestrator | Sunday 29 March 2026 02:42:09 +0000 (0:00:00.132) 0:00:12.100 **********
2026-03-29 02:42:11.768319 | orchestrator | changed: [testbed-node-3] => {
2026-03-29 02:42:11.768323 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-29 02:42:11.768327 | orchestrator |         "ceph_osd_devices": {
2026-03-29 02:42:11.768332 | orchestrator |             "sdb": {
2026-03-29 02:42:11.768336 | orchestrator |                 "osd_lvm_uuid": "6a86fe60-1e0e-551e-abcc-872f54df7e3c"
2026-03-29 02:42:11.768341 | orchestrator |             },
2026-03-29 02:42:11.768345 | orchestrator |             "sdc": {
2026-03-29 02:42:11.768350 | orchestrator |                 "osd_lvm_uuid": "09734191-f9bf-5626-be02-fa226447c12f"
2026-03-29 02:42:11.768354 | orchestrator |             }
2026-03-29 02:42:11.768359 | orchestrator |         },
2026-03-29 02:42:11.768363 | orchestrator |         "lvm_volumes": [
2026-03-29 02:42:11.768369 | orchestrator |             {
2026-03-29 02:42:11.768376 | orchestrator |                 "data": "osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c",
2026-03-29 02:42:11.768382 | orchestrator |                 "data_vg": "ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c"
2026-03-29 02:42:11.768387 | orchestrator |             },
2026-03-29 02:42:11.768393 | orchestrator |             {
2026-03-29 02:42:11.768399 | orchestrator |                 "data": "osd-block-09734191-f9bf-5626-be02-fa226447c12f",
2026-03-29 02:42:11.768410 | orchestrator |                 "data_vg": "ceph-09734191-f9bf-5626-be02-fa226447c12f"
2026-03-29 02:42:11.768416 | orchestrator |             }
2026-03-29 02:42:11.768422 | orchestrator |         ]
2026-03-29 02:42:11.768429 | orchestrator |     }
2026-03-29 02:42:11.768435 | orchestrator | }
2026-03-29 02:42:11.768442 | orchestrator |
2026-03-29 02:42:11.768448 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-29 02:42:11.768455 | orchestrator | Sunday 29 March 2026 02:42:09 +0000 (0:00:00.207) 0:00:12.308 **********
2026-03-29
02:42:11.768461 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 02:42:11.768468 | orchestrator |
2026-03-29 02:42:11.768475 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-29 02:42:11.768480 | orchestrator |
2026-03-29 02:42:11.768485 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-29 02:42:11.768489 | orchestrator | Sunday 29 March 2026 02:42:11 +0000 (0:00:01.662) 0:00:13.971 **********
2026-03-29 02:42:11.768494 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-29 02:42:11.768498 | orchestrator |
2026-03-29 02:42:11.768502 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-29 02:42:11.768506 | orchestrator | Sunday 29 March 2026 02:42:11 +0000 (0:00:00.260) 0:00:14.231 **********
2026-03-29 02:42:11.768511 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:42:11.768515 | orchestrator |
2026-03-29 02:42:11.768523 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.009913 | orchestrator | Sunday 29 March 2026 02:42:11 +0000 (0:00:00.258) 0:00:14.490 **********
2026-03-29 02:42:20.010084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-29 02:42:20.010102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-29 02:42:20.010112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-29 02:42:20.010136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-29 02:42:20.010145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-29 02:42:20.010154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-29 02:42:20.010163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-29 02:42:20.010172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-29 02:42:20.010181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-29 02:42:20.010190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-29 02:42:20.010198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-29 02:42:20.010207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-29 02:42:20.010216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-29 02:42:20.010225 | orchestrator |
2026-03-29 02:42:20.010235 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010244 | orchestrator | Sunday 29 March 2026 02:42:12 +0000 (0:00:00.583) 0:00:15.073 **********
2026-03-29 02:42:20.010253 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.010263 | orchestrator |
2026-03-29 02:42:20.010272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010280 | orchestrator | Sunday 29 March 2026 02:42:12 +0000 (0:00:00.213) 0:00:15.286 **********
2026-03-29 02:42:20.010289 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.010297 | orchestrator |
2026-03-29 02:42:20.010306 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010315 | orchestrator | Sunday 29 March 2026 02:42:12 +0000 (0:00:00.233) 0:00:15.520 **********
2026-03-29 02:42:20.010345 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.010354 | orchestrator |
2026-03-29 02:42:20.010363 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010372 | orchestrator | Sunday 29 March 2026 02:42:13 +0000 (0:00:00.232) 0:00:15.752 **********
2026-03-29 02:42:20.010380 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.010389 | orchestrator |
2026-03-29 02:42:20.010397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010406 | orchestrator | Sunday 29 March 2026 02:42:13 +0000 (0:00:00.231) 0:00:15.984 **********
2026-03-29 02:42:20.010414 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.010423 | orchestrator |
2026-03-29 02:42:20.010432 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010440 | orchestrator | Sunday 29 March 2026 02:42:13 +0000 (0:00:00.220) 0:00:16.204 **********
2026-03-29 02:42:20.010449 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.010458 | orchestrator |
2026-03-29 02:42:20.010468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010479 | orchestrator | Sunday 29 March 2026 02:42:13 +0000 (0:00:00.202) 0:00:16.407 **********
2026-03-29 02:42:20.010489 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.010499 | orchestrator |
2026-03-29 02:42:20.010509 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010519 | orchestrator | Sunday 29 March 2026 02:42:13 +0000 (0:00:00.219) 0:00:16.626 **********
2026-03-29 02:42:20.010529 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.010539 | orchestrator |
2026-03-29 02:42:20.010549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010559 |
orchestrator | Sunday 29 March 2026 02:42:14 +0000 (0:00:00.254) 0:00:16.880 **********
2026-03-29 02:42:20.010569 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb)
2026-03-29 02:42:20.010581 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb)
2026-03-29 02:42:20.010591 | orchestrator |
2026-03-29 02:42:20.010602 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010612 | orchestrator | Sunday 29 March 2026 02:42:14 +0000 (0:00:00.612) 0:00:17.493 **********
2026-03-29 02:42:20.010622 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0)
2026-03-29 02:42:20.010633 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0)
2026-03-29 02:42:20.010643 | orchestrator |
2026-03-29 02:42:20.010653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010663 | orchestrator | Sunday 29 March 2026 02:42:15 +0000 (0:00:00.590) 0:00:18.083 **********
2026-03-29 02:42:20.010674 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62)
2026-03-29 02:42:20.010702 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62)
2026-03-29 02:42:20.010712 | orchestrator |
2026-03-29 02:42:20.010722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010748 | orchestrator | Sunday 29 March 2026 02:42:16 +0000 (0:00:00.756) 0:00:18.839 **********
2026-03-29 02:42:20.010759 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a)
2026-03-29 02:42:20.010769 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a)
2026-03-29 02:42:20.010779 | orchestrator |
2026-03-29 02:42:20.010790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:42:20.010804 | orchestrator | Sunday 29 March 2026 02:42:16 +0000 (0:00:00.410) 0:00:19.250 **********
2026-03-29 02:42:20.010815 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-29 02:42:20.010834 | orchestrator |
2026-03-29 02:42:20.010849 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.010864 | orchestrator | Sunday 29 March 2026 02:42:16 +0000 (0:00:00.347) 0:00:19.598 **********
2026-03-29 02:42:20.010881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-29 02:42:20.010903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-29 02:42:20.010916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-29 02:42:20.010930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-29 02:42:20.010943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-29 02:42:20.010957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-29 02:42:20.010971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-29 02:42:20.010986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-29 02:42:20.011001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-29 02:42:20.011015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-29 02:42:20.011031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-29 02:42:20.011044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-29 02:42:20.011059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-29 02:42:20.011074 | orchestrator |
2026-03-29 02:42:20.011089 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011104 | orchestrator | Sunday 29 March 2026 02:42:17 +0000 (0:00:00.373) 0:00:19.971 **********
2026-03-29 02:42:20.011119 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.011133 | orchestrator |
2026-03-29 02:42:20.011147 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011162 | orchestrator | Sunday 29 March 2026 02:42:17 +0000 (0:00:00.200) 0:00:20.172 **********
2026-03-29 02:42:20.011176 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.011192 | orchestrator |
2026-03-29 02:42:20.011207 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011222 | orchestrator | Sunday 29 March 2026 02:42:17 +0000 (0:00:00.208) 0:00:20.380 **********
2026-03-29 02:42:20.011236 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.011252 | orchestrator |
2026-03-29 02:42:20.011267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011283 | orchestrator | Sunday 29 March 2026 02:42:17 +0000 (0:00:00.189) 0:00:20.570 **********
2026-03-29 02:42:20.011299 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.011310 | orchestrator |
2026-03-29 02:42:20.011319 | orchestrator | TASK [Add known
partitions to the list of available block devices] *************
2026-03-29 02:42:20.011328 | orchestrator | Sunday 29 March 2026 02:42:18 +0000 (0:00:00.187) 0:00:20.757 **********
2026-03-29 02:42:20.011337 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.011345 | orchestrator |
2026-03-29 02:42:20.011354 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011363 | orchestrator | Sunday 29 March 2026 02:42:18 +0000 (0:00:00.191) 0:00:20.948 **********
2026-03-29 02:42:20.011387 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.011405 | orchestrator |
2026-03-29 02:42:20.011414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011423 | orchestrator | Sunday 29 March 2026 02:42:18 +0000 (0:00:00.207) 0:00:21.156 **********
2026-03-29 02:42:20.011432 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.011450 | orchestrator |
2026-03-29 02:42:20.011459 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011468 | orchestrator | Sunday 29 March 2026 02:42:18 +0000 (0:00:00.186) 0:00:21.343 **********
2026-03-29 02:42:20.011477 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:20.011486 | orchestrator |
2026-03-29 02:42:20.011494 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011503 | orchestrator | Sunday 29 March 2026 02:42:19 +0000 (0:00:00.529) 0:00:21.872 **********
2026-03-29 02:42:20.011512 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-29 02:42:20.011529 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-29 02:42:20.011544 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-29 02:42:20.011560 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-29 02:42:20.011574 | orchestrator |
2026-03-29 02:42:20.011588 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:20.011602 | orchestrator | Sunday 29 March 2026 02:42:19 +0000 (0:00:00.651) 0:00:22.524 **********
2026-03-29 02:42:20.011617 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:26.881810 | orchestrator |
2026-03-29 02:42:26.881919 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:26.881928 | orchestrator | Sunday 29 March 2026 02:42:20 +0000 (0:00:00.208) 0:00:22.732 **********
2026-03-29 02:42:26.881934 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:26.881941 | orchestrator |
2026-03-29 02:42:26.881945 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:26.881951 | orchestrator | Sunday 29 March 2026 02:42:20 +0000 (0:00:00.216) 0:00:22.949 **********
2026-03-29 02:42:26.881966 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:26.881971 | orchestrator |
2026-03-29 02:42:26.881976 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:26.881981 | orchestrator | Sunday 29 March 2026 02:42:20 +0000 (0:00:00.227) 0:00:23.176 **********
2026-03-29 02:42:26.881985 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:26.881990 | orchestrator |
2026-03-29 02:42:26.881994 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-29 02:42:26.881999 | orchestrator | Sunday 29 March 2026 02:42:20 +0000 (0:00:00.214) 0:00:23.391 **********
2026-03-29 02:42:26.882003 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-29 02:42:26.882009 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-29 02:42:26.882046 | orchestrator |
2026-03-29 02:42:26.882052 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-29 02:42:26.882057 | orchestrator | Sunday 29 March 2026 02:42:20 +0000 (0:00:00.210) 0:00:23.601 **********
2026-03-29 02:42:26.882061 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:26.882066 | orchestrator |
2026-03-29 02:42:26.882070 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-29 02:42:26.882075 | orchestrator | Sunday 29 March 2026 02:42:21 +0000 (0:00:00.149) 0:00:23.750 **********
2026-03-29 02:42:26.882080 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:26.882084 | orchestrator |
2026-03-29 02:42:26.882089 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-29 02:42:26.882093 | orchestrator | Sunday 29 March 2026 02:42:21 +0000 (0:00:00.155) 0:00:23.905 **********
2026-03-29 02:42:26.882098 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:42:26.882102 | orchestrator |
2026-03-29 02:42:26.882106 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-29 02:42:26.882111 | orchestrator | Sunday 29 March 2026 02:42:21 +0000 (0:00:00.152) 0:00:24.058 **********
2026-03-29 02:42:26.882116 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:42:26.882121 | orchestrator |
2026-03-29 02:42:26.882126 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-29 02:42:26.882130 | orchestrator | Sunday 29 March 2026 02:42:21 +0000 (0:00:00.151) 0:00:24.209 **********
2026-03-29 02:42:26.882149 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df205cf6-8b40-53f0-aec9-c93c6a681056'}})
2026-03-29 02:42:26.882155 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}})
2026-03-29 02:42:26.882160 | orchestrator |
2026-03-29 02:42:26.882165 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-29 02:42:26.882169 | orchestrator | Sunday 29 March 2026 02:42:21 +0000 (0:00:00.173) 0:00:24.383 ********** 2026-03-29 02:42:26.882174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df205cf6-8b40-53f0-aec9-c93c6a681056'}})  2026-03-29 02:42:26.882181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}})  2026-03-29 02:42:26.882185 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882190 | orchestrator | 2026-03-29 02:42:26.882194 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-29 02:42:26.882199 | orchestrator | Sunday 29 March 2026 02:42:22 +0000 (0:00:00.467) 0:00:24.850 ********** 2026-03-29 02:42:26.882203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df205cf6-8b40-53f0-aec9-c93c6a681056'}})  2026-03-29 02:42:26.882208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}})  2026-03-29 02:42:26.882212 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882217 | orchestrator | 2026-03-29 02:42:26.882221 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-29 02:42:26.882226 | orchestrator | Sunday 29 March 2026 02:42:22 +0000 (0:00:00.176) 0:00:25.027 ********** 2026-03-29 02:42:26.882230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df205cf6-8b40-53f0-aec9-c93c6a681056'}})  2026-03-29 02:42:26.882235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}})  2026-03-29 02:42:26.882240 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882244 | 
orchestrator | 2026-03-29 02:42:26.882249 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-29 02:42:26.882253 | orchestrator | Sunday 29 March 2026 02:42:22 +0000 (0:00:00.171) 0:00:25.198 ********** 2026-03-29 02:42:26.882258 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:42:26.882262 | orchestrator | 2026-03-29 02:42:26.882267 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-29 02:42:26.882271 | orchestrator | Sunday 29 March 2026 02:42:22 +0000 (0:00:00.162) 0:00:25.360 ********** 2026-03-29 02:42:26.882276 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:42:26.882280 | orchestrator | 2026-03-29 02:42:26.882285 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-29 02:42:26.882289 | orchestrator | Sunday 29 March 2026 02:42:22 +0000 (0:00:00.157) 0:00:25.518 ********** 2026-03-29 02:42:26.882305 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882310 | orchestrator | 2026-03-29 02:42:26.882314 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-29 02:42:26.882319 | orchestrator | Sunday 29 March 2026 02:42:22 +0000 (0:00:00.149) 0:00:25.667 ********** 2026-03-29 02:42:26.882323 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882328 | orchestrator | 2026-03-29 02:42:26.882332 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-29 02:42:26.882337 | orchestrator | Sunday 29 March 2026 02:42:23 +0000 (0:00:00.160) 0:00:25.827 ********** 2026-03-29 02:42:26.882345 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882349 | orchestrator | 2026-03-29 02:42:26.882354 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-29 02:42:26.882358 | orchestrator | Sunday 29 March 2026 02:42:23 +0000 
(0:00:00.167) 0:00:25.994 ********** 2026-03-29 02:42:26.882367 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 02:42:26.882372 | orchestrator |  "ceph_osd_devices": { 2026-03-29 02:42:26.882376 | orchestrator |  "sdb": { 2026-03-29 02:42:26.882382 | orchestrator |  "osd_lvm_uuid": "df205cf6-8b40-53f0-aec9-c93c6a681056" 2026-03-29 02:42:26.882386 | orchestrator |  }, 2026-03-29 02:42:26.882391 | orchestrator |  "sdc": { 2026-03-29 02:42:26.882396 | orchestrator |  "osd_lvm_uuid": "eec6ab8e-cb01-5d55-a04b-fe63d54a2948" 2026-03-29 02:42:26.882400 | orchestrator |  } 2026-03-29 02:42:26.882405 | orchestrator |  } 2026-03-29 02:42:26.882409 | orchestrator | } 2026-03-29 02:42:26.882414 | orchestrator | 2026-03-29 02:42:26.882419 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-29 02:42:26.882423 | orchestrator | Sunday 29 March 2026 02:42:23 +0000 (0:00:00.177) 0:00:26.172 ********** 2026-03-29 02:42:26.882428 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882432 | orchestrator | 2026-03-29 02:42:26.882437 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-29 02:42:26.882441 | orchestrator | Sunday 29 March 2026 02:42:23 +0000 (0:00:00.134) 0:00:26.306 ********** 2026-03-29 02:42:26.882446 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882450 | orchestrator | 2026-03-29 02:42:26.882455 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-29 02:42:26.882459 | orchestrator | Sunday 29 March 2026 02:42:23 +0000 (0:00:00.136) 0:00:26.443 ********** 2026-03-29 02:42:26.882464 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:42:26.882468 | orchestrator | 2026-03-29 02:42:26.882473 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-29 02:42:26.882477 | orchestrator | Sunday 29 March 2026 02:42:23 +0000 
(0:00:00.148) 0:00:26.592 ********** 2026-03-29 02:42:26.882482 | orchestrator | changed: [testbed-node-4] => { 2026-03-29 02:42:26.882486 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-29 02:42:26.882491 | orchestrator |  "ceph_osd_devices": { 2026-03-29 02:42:26.882495 | orchestrator |  "sdb": { 2026-03-29 02:42:26.882500 | orchestrator |  "osd_lvm_uuid": "df205cf6-8b40-53f0-aec9-c93c6a681056" 2026-03-29 02:42:26.882505 | orchestrator |  }, 2026-03-29 02:42:26.882509 | orchestrator |  "sdc": { 2026-03-29 02:42:26.882514 | orchestrator |  "osd_lvm_uuid": "eec6ab8e-cb01-5d55-a04b-fe63d54a2948" 2026-03-29 02:42:26.882518 | orchestrator |  } 2026-03-29 02:42:26.882523 | orchestrator |  }, 2026-03-29 02:42:26.882527 | orchestrator |  "lvm_volumes": [ 2026-03-29 02:42:26.882532 | orchestrator |  { 2026-03-29 02:42:26.882536 | orchestrator |  "data": "osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056", 2026-03-29 02:42:26.882541 | orchestrator |  "data_vg": "ceph-df205cf6-8b40-53f0-aec9-c93c6a681056" 2026-03-29 02:42:26.882545 | orchestrator |  }, 2026-03-29 02:42:26.882550 | orchestrator |  { 2026-03-29 02:42:26.882554 | orchestrator |  "data": "osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948", 2026-03-29 02:42:26.882559 | orchestrator |  "data_vg": "ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948" 2026-03-29 02:42:26.882563 | orchestrator |  } 2026-03-29 02:42:26.882568 | orchestrator |  ] 2026-03-29 02:42:26.882572 | orchestrator |  } 2026-03-29 02:42:26.882577 | orchestrator | } 2026-03-29 02:42:26.882582 | orchestrator | 2026-03-29 02:42:26.882586 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-29 02:42:26.882591 | orchestrator | Sunday 29 March 2026 02:42:24 +0000 (0:00:00.560) 0:00:27.153 ********** 2026-03-29 02:42:26.882595 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-29 02:42:26.882600 | orchestrator | 2026-03-29 02:42:26.882604 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-29 02:42:26.882609 | orchestrator | 2026-03-29 02:42:26.882613 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 02:42:26.882618 | orchestrator | Sunday 29 March 2026 02:42:25 +0000 (0:00:01.410) 0:00:28.563 ********** 2026-03-29 02:42:26.882626 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-29 02:42:26.882630 | orchestrator | 2026-03-29 02:42:26.882635 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 02:42:26.882640 | orchestrator | Sunday 29 March 2026 02:42:26 +0000 (0:00:00.276) 0:00:28.840 ********** 2026-03-29 02:42:26.882644 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:42:26.882649 | orchestrator | 2026-03-29 02:42:26.882653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:26.882658 | orchestrator | Sunday 29 March 2026 02:42:26 +0000 (0:00:00.253) 0:00:29.094 ********** 2026-03-29 02:42:26.882662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-29 02:42:26.882667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-29 02:42:26.882671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-29 02:42:26.882676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-29 02:42:26.882680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-29 02:42:26.882710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-29 02:42:35.971379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-29 02:42:35.971505 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-29 02:42:35.971519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-29 02:42:35.971529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-29 02:42:35.971553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-29 02:42:35.971571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-29 02:42:35.971581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-29 02:42:35.971591 | orchestrator | 2026-03-29 02:42:35.971604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.971620 | orchestrator | Sunday 29 March 2026 02:42:26 +0000 (0:00:00.503) 0:00:29.598 ********** 2026-03-29 02:42:35.971634 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:42:35.971654 | orchestrator | 2026-03-29 02:42:35.971676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.971691 | orchestrator | Sunday 29 March 2026 02:42:27 +0000 (0:00:00.273) 0:00:29.872 ********** 2026-03-29 02:42:35.971762 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:42:35.971777 | orchestrator | 2026-03-29 02:42:35.971791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.971804 | orchestrator | Sunday 29 March 2026 02:42:27 +0000 (0:00:00.245) 0:00:30.117 ********** 2026-03-29 02:42:35.971817 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:42:35.971830 | orchestrator | 2026-03-29 02:42:35.971844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.971858 | 
orchestrator | Sunday 29 March 2026 02:42:27 +0000 (0:00:00.204) 0:00:30.321 ********** 2026-03-29 02:42:35.971871 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:42:35.971885 | orchestrator | 2026-03-29 02:42:35.971899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.971912 | orchestrator | Sunday 29 March 2026 02:42:28 +0000 (0:00:00.744) 0:00:31.065 ********** 2026-03-29 02:42:35.971925 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:42:35.971940 | orchestrator | 2026-03-29 02:42:35.971953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.971966 | orchestrator | Sunday 29 March 2026 02:42:28 +0000 (0:00:00.222) 0:00:31.288 ********** 2026-03-29 02:42:35.972010 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:42:35.972025 | orchestrator | 2026-03-29 02:42:35.972039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.972053 | orchestrator | Sunday 29 March 2026 02:42:28 +0000 (0:00:00.266) 0:00:31.555 ********** 2026-03-29 02:42:35.972067 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:42:35.972082 | orchestrator | 2026-03-29 02:42:35.972096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.972110 | orchestrator | Sunday 29 March 2026 02:42:29 +0000 (0:00:00.254) 0:00:31.809 ********** 2026-03-29 02:42:35.972124 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:42:35.972138 | orchestrator | 2026-03-29 02:42:35.972152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.972166 | orchestrator | Sunday 29 March 2026 02:42:29 +0000 (0:00:00.224) 0:00:32.034 ********** 2026-03-29 02:42:35.972180 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6) 2026-03-29 02:42:35.972196 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6) 2026-03-29 02:42:35.972210 | orchestrator | 2026-03-29 02:42:35.972224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.972238 | orchestrator | Sunday 29 March 2026 02:42:29 +0000 (0:00:00.454) 0:00:32.488 ********** 2026-03-29 02:42:35.972254 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735) 2026-03-29 02:42:35.972269 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735) 2026-03-29 02:42:35.972285 | orchestrator | 2026-03-29 02:42:35.972300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.972314 | orchestrator | Sunday 29 March 2026 02:42:30 +0000 (0:00:00.475) 0:00:32.964 ********** 2026-03-29 02:42:35.972329 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa) 2026-03-29 02:42:35.972342 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa) 2026-03-29 02:42:35.972352 | orchestrator | 2026-03-29 02:42:35.972368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:42:35.972383 | orchestrator | Sunday 29 March 2026 02:42:30 +0000 (0:00:00.474) 0:00:33.439 ********** 2026-03-29 02:42:35.972399 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b) 2026-03-29 02:42:35.972413 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b) 2026-03-29 02:42:35.972427 | orchestrator | 2026-03-29 02:42:35.972440 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-29 02:42:35.972455 | orchestrator | Sunday 29 March 2026 02:42:31 +0000 (0:00:00.453) 0:00:33.893 ********** 2026-03-29 02:42:35.972469 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 02:42:35.972485 | orchestrator | 2026-03-29 02:42:35.972500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:42:35.972541 | orchestrator | Sunday 29 March 2026 02:42:31 +0000 (0:00:00.346) 0:00:34.239 ********** 2026-03-29 02:42:35.972557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-29 02:42:35.972570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-29 02:42:35.972583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-29 02:42:35.972608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-29 02:42:35.972623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-29 02:42:35.972637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-29 02:42:35.972665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-29 02:42:35.972679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-29 02:42:35.972761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-29 02:42:35.972785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-29 02:42:35.972798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
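An aside on the `osd_lvm_uuid` values that the "Set UUIDs for OSD VGs/LVs" task produces for each node: their version nibble is `5` (e.g. `df205cf6-8b40-53f0-…`), i.e. they are name-based UUIDs, which keeps them stable across repeated runs of the configuration play. A minimal sketch of such a derivation follows; the exact namespace and name inputs (hostname plus device name here) are an assumption for illustration, not necessarily what the playbook uses.

```python
import uuid


def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable, name-based (version 5) UUID for an OSD device.

    NOTE: seeding with NAMESPACE_DNS and "<hostname>-<device>" is an
    illustrative assumption; the playbook may use different inputs.
    """
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))


# Identical inputs always yield the identical UUID, so re-running the
# configuration step does not invent new VG/LV names for existing OSDs.
print(osd_lvm_uuid("testbed-node-4", "sdb"))
```

Because the UUID is a pure function of its inputs, the task can be re-run idempotently; only a new host/device pair produces a new identity.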
2026-03-29 02:42:35.972812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-29 02:42:35.972826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-29 02:42:35.972839 | orchestrator |
2026-03-29 02:42:35.972854 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.972868 | orchestrator | Sunday 29 March 2026 02:42:32 +0000 (0:00:00.628) 0:00:34.868 **********
2026-03-29 02:42:35.972881 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.972894 | orchestrator |
2026-03-29 02:42:35.972908 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.972921 | orchestrator | Sunday 29 March 2026 02:42:32 +0000 (0:00:00.230) 0:00:35.099 **********
2026-03-29 02:42:35.972934 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.972948 | orchestrator |
2026-03-29 02:42:35.972962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.972977 | orchestrator | Sunday 29 March 2026 02:42:32 +0000 (0:00:00.213) 0:00:35.312 **********
2026-03-29 02:42:35.972990 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973005 | orchestrator |
2026-03-29 02:42:35.973021 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973036 | orchestrator | Sunday 29 March 2026 02:42:32 +0000 (0:00:00.215) 0:00:35.527 **********
2026-03-29 02:42:35.973051 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973066 | orchestrator |
2026-03-29 02:42:35.973081 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973095 | orchestrator | Sunday 29 March 2026 02:42:33 +0000 (0:00:00.222) 0:00:35.750 **********
2026-03-29 02:42:35.973110 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973128 | orchestrator |
2026-03-29 02:42:35.973151 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973165 | orchestrator | Sunday 29 March 2026 02:42:33 +0000 (0:00:00.216) 0:00:35.967 **********
2026-03-29 02:42:35.973179 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973193 | orchestrator |
2026-03-29 02:42:35.973206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973221 | orchestrator | Sunday 29 March 2026 02:42:33 +0000 (0:00:00.226) 0:00:36.193 **********
2026-03-29 02:42:35.973235 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973250 | orchestrator |
2026-03-29 02:42:35.973265 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973276 | orchestrator | Sunday 29 March 2026 02:42:33 +0000 (0:00:00.219) 0:00:36.413 **********
2026-03-29 02:42:35.973285 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973300 | orchestrator |
2026-03-29 02:42:35.973324 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973340 | orchestrator | Sunday 29 March 2026 02:42:33 +0000 (0:00:00.229) 0:00:36.643 **********
2026-03-29 02:42:35.973354 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-29 02:42:35.973368 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-29 02:42:35.973383 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-29 02:42:35.973397 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-29 02:42:35.973412 | orchestrator |
2026-03-29 02:42:35.973426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973455 | orchestrator | Sunday 29 March 2026 02:42:34 +0000 (0:00:00.868) 0:00:37.511 **********
2026-03-29 02:42:35.973468 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973481 | orchestrator |
2026-03-29 02:42:35.973494 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973508 | orchestrator | Sunday 29 March 2026 02:42:34 +0000 (0:00:00.211) 0:00:37.723 **********
2026-03-29 02:42:35.973522 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973536 | orchestrator |
2026-03-29 02:42:35.973550 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973564 | orchestrator | Sunday 29 March 2026 02:42:35 +0000 (0:00:00.217) 0:00:37.941 **********
2026-03-29 02:42:35.973578 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973592 | orchestrator |
2026-03-29 02:42:35.973605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:42:35.973619 | orchestrator | Sunday 29 March 2026 02:42:35 +0000 (0:00:00.551) 0:00:38.492 **********
2026-03-29 02:42:35.973634 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:35.973647 | orchestrator |
2026-03-29 02:42:35.973680 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-29 02:42:39.827568 | orchestrator | Sunday 29 March 2026 02:42:35 +0000 (0:00:00.196) 0:00:38.689 **********
2026-03-29 02:42:39.827667 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-29 02:42:39.827683 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-29 02:42:39.827694 | orchestrator |
2026-03-29 02:42:39.827759 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-29 02:42:39.827788 | orchestrator | Sunday 29 March 2026 02:42:36 +0000 (0:00:00.159) 0:00:38.848 **********
2026-03-29 02:42:39.827799 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.827810 | orchestrator |
2026-03-29 02:42:39.827821 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-29 02:42:39.827831 | orchestrator | Sunday 29 March 2026 02:42:36 +0000 (0:00:00.146) 0:00:38.995 **********
2026-03-29 02:42:39.827841 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.827852 | orchestrator |
2026-03-29 02:42:39.827861 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-29 02:42:39.827871 | orchestrator | Sunday 29 March 2026 02:42:36 +0000 (0:00:00.136) 0:00:39.131 **********
2026-03-29 02:42:39.827881 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.827890 | orchestrator |
2026-03-29 02:42:39.827900 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-29 02:42:39.827910 | orchestrator | Sunday 29 March 2026 02:42:36 +0000 (0:00:00.116) 0:00:39.248 **********
2026-03-29 02:42:39.827920 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:42:39.827931 | orchestrator |
2026-03-29 02:42:39.827942 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-29 02:42:39.827952 | orchestrator | Sunday 29 March 2026 02:42:36 +0000 (0:00:00.136) 0:00:39.384 **********
2026-03-29 02:42:39.827963 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}})
2026-03-29 02:42:39.827974 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}})
2026-03-29 02:42:39.827984 | orchestrator |
2026-03-29 02:42:39.827994 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-29 02:42:39.828003 | orchestrator | Sunday 29 March 2026 02:42:36 +0000 (0:00:00.150) 0:00:39.535 **********
2026-03-29 02:42:39.828014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}})
2026-03-29 02:42:39.828025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}})
2026-03-29 02:42:39.828035 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828066 | orchestrator |
2026-03-29 02:42:39.828077 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-29 02:42:39.828087 | orchestrator | Sunday 29 March 2026 02:42:36 +0000 (0:00:00.143) 0:00:39.678 **********
2026-03-29 02:42:39.828097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}})
2026-03-29 02:42:39.828107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}})
2026-03-29 02:42:39.828117 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828125 | orchestrator |
2026-03-29 02:42:39.828135 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-29 02:42:39.828145 | orchestrator | Sunday 29 March 2026 02:42:37 +0000 (0:00:00.138) 0:00:39.817 **********
2026-03-29 02:42:39.828154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}})
2026-03-29 02:42:39.828163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}})
2026-03-29 02:42:39.828174 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828185 | orchestrator |
2026-03-29 02:42:39.828195 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-29 02:42:39.828206 | orchestrator | Sunday 29 March 2026 02:42:37 +0000 (0:00:00.170) 0:00:39.987 **********
2026-03-29 02:42:39.828216 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:42:39.828227 | orchestrator |
2026-03-29 02:42:39.828237 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-29 02:42:39.828248 | orchestrator | Sunday 29 March 2026 02:42:37 +0000 (0:00:00.145) 0:00:40.133 **********
2026-03-29 02:42:39.828258 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:42:39.828269 | orchestrator |
2026-03-29 02:42:39.828279 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-29 02:42:39.828290 | orchestrator | Sunday 29 March 2026 02:42:37 +0000 (0:00:00.273) 0:00:40.407 **********
2026-03-29 02:42:39.828300 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828311 | orchestrator |
2026-03-29 02:42:39.828322 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-29 02:42:39.828332 | orchestrator | Sunday 29 March 2026 02:42:37 +0000 (0:00:00.127) 0:00:40.535 **********
2026-03-29 02:42:39.828343 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828354 | orchestrator |
2026-03-29 02:42:39.828364 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-29 02:42:39.828374 | orchestrator | Sunday 29 March 2026 02:42:37 +0000 (0:00:00.142) 0:00:40.677 **********
2026-03-29 02:42:39.828385 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828395 | orchestrator |
2026-03-29 02:42:39.828405 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-29 02:42:39.828415 | orchestrator | Sunday 29 March 2026 02:42:38 +0000 (0:00:00.139) 0:00:40.816 **********
2026-03-29 02:42:39.828425 | orchestrator | ok: [testbed-node-5] => {
2026-03-29 02:42:39.828435 | orchestrator |     "ceph_osd_devices": {
2026-03-29 02:42:39.828446 | orchestrator |         "sdb": {
2026-03-29 02:42:39.828476 | orchestrator |             "osd_lvm_uuid": "0734d53c-ec7b-5877-b2ad-f9abf7f5e844"
2026-03-29 02:42:39.828487 | orchestrator |         },
2026-03-29 02:42:39.828499 | orchestrator |         "sdc": {
2026-03-29 02:42:39.828510 | orchestrator |             "osd_lvm_uuid": "4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33"
2026-03-29 02:42:39.828520 | orchestrator |         }
2026-03-29 02:42:39.828531 | orchestrator |     }
2026-03-29 02:42:39.828539 | orchestrator | }
2026-03-29 02:42:39.828546 | orchestrator |
2026-03-29 02:42:39.828552 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-29 02:42:39.828565 | orchestrator | Sunday 29 March 2026 02:42:38 +0000 (0:00:00.150) 0:00:40.967 **********
2026-03-29 02:42:39.828572 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828586 | orchestrator |
2026-03-29 02:42:39.828593 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-29 02:42:39.828599 | orchestrator | Sunday 29 March 2026 02:42:38 +0000 (0:00:00.138) 0:00:41.105 **********
2026-03-29 02:42:39.828605 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828611 | orchestrator |
2026-03-29 02:42:39.828617 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-29 02:42:39.828623 | orchestrator | Sunday 29 March 2026 02:42:38 +0000 (0:00:00.131) 0:00:41.237 **********
2026-03-29 02:42:39.828629 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:42:39.828635 | orchestrator |
2026-03-29 02:42:39.828641 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-29 02:42:39.828647 | orchestrator | Sunday 29 March 2026 02:42:38 +0000 (0:00:00.138) 0:00:41.375 **********
2026-03-29 02:42:39.828653 | orchestrator | changed: [testbed-node-5] => {
2026-03-29 02:42:39.828660 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-29 02:42:39.828666 | orchestrator
|  "ceph_osd_devices": { 2026-03-29 02:42:39.828672 | orchestrator |  "sdb": { 2026-03-29 02:42:39.828678 | orchestrator |  "osd_lvm_uuid": "0734d53c-ec7b-5877-b2ad-f9abf7f5e844" 2026-03-29 02:42:39.828685 | orchestrator |  }, 2026-03-29 02:42:39.828691 | orchestrator |  "sdc": { 2026-03-29 02:42:39.828721 | orchestrator |  "osd_lvm_uuid": "4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33" 2026-03-29 02:42:39.828733 | orchestrator |  } 2026-03-29 02:42:39.828743 | orchestrator |  }, 2026-03-29 02:42:39.828749 | orchestrator |  "lvm_volumes": [ 2026-03-29 02:42:39.828755 | orchestrator |  { 2026-03-29 02:42:39.828762 | orchestrator |  "data": "osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844", 2026-03-29 02:42:39.828768 | orchestrator |  "data_vg": "ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844" 2026-03-29 02:42:39.828774 | orchestrator |  }, 2026-03-29 02:42:39.828780 | orchestrator |  { 2026-03-29 02:42:39.828786 | orchestrator |  "data": "osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33", 2026-03-29 02:42:39.828792 | orchestrator |  "data_vg": "ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33" 2026-03-29 02:42:39.828798 | orchestrator |  } 2026-03-29 02:42:39.828805 | orchestrator |  ] 2026-03-29 02:42:39.828811 | orchestrator |  } 2026-03-29 02:42:39.828817 | orchestrator | } 2026-03-29 02:42:39.828823 | orchestrator | 2026-03-29 02:42:39.828829 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-29 02:42:39.828836 | orchestrator | Sunday 29 March 2026 02:42:38 +0000 (0:00:00.211) 0:00:41.587 ********** 2026-03-29 02:42:39.828842 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-29 02:42:39.828848 | orchestrator | 2026-03-29 02:42:39.828854 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:42:39.828861 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 02:42:39.828868 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 02:42:39.828874 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 02:42:39.828881 | orchestrator | 2026-03-29 02:42:39.828887 | orchestrator | 2026-03-29 02:42:39.828893 | orchestrator | 2026-03-29 02:42:39.828899 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:42:39.828905 | orchestrator | Sunday 29 March 2026 02:42:39 +0000 (0:00:00.951) 0:00:42.539 ********** 2026-03-29 02:42:39.828911 | orchestrator | =============================================================================== 2026-03-29 02:42:39.828917 | orchestrator | Write configuration file ------------------------------------------------ 4.02s 2026-03-29 02:42:39.828923 | orchestrator | Add known links to the list of available block devices ------------------ 1.52s 2026-03-29 02:42:39.828934 | orchestrator | Add known partitions to the list of available block devices ------------- 1.35s 2026-03-29 02:42:39.828941 | orchestrator | Print configuration data ------------------------------------------------ 0.98s 2026-03-29 02:42:39.828947 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-03-29 02:42:39.828953 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-03-29 02:42:39.828959 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.77s 2026-03-29 02:42:39.828965 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-03-29 02:42:39.828971 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-03-29 02:42:39.828977 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-03-29 
02:42:39.828983 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-03-29 02:42:39.828989 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-03-29 02:42:39.828996 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.63s 2026-03-29 02:42:39.829007 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2026-03-29 02:42:40.099293 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-03-29 02:42:40.099403 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-03-29 02:42:40.099416 | orchestrator | Set OSD devices config data --------------------------------------------- 0.59s 2026-03-29 02:42:40.099440 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2026-03-29 02:42:40.099448 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.58s 2026-03-29 02:42:40.099455 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2026-03-29 02:43:02.279581 | orchestrator | 2026-03-29 02:43:02 | INFO  | Task 4be4da86-e747-476f-a4d3-7815f67ef370 (sync inventory) is running in background. Output coming soon. 
2026-03-29 02:43:28.808048 | orchestrator | 2026-03-29 02:43:03 | INFO  | Starting group_vars file reorganization 2026-03-29 02:43:28.808173 | orchestrator | 2026-03-29 02:43:03 | INFO  | Moved 0 file(s) to their respective directories 2026-03-29 02:43:28.808194 | orchestrator | 2026-03-29 02:43:03 | INFO  | Group_vars file reorganization completed 2026-03-29 02:43:28.808210 | orchestrator | 2026-03-29 02:43:06 | INFO  | Starting variable preparation from inventory 2026-03-29 02:43:28.808226 | orchestrator | 2026-03-29 02:43:09 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-29 02:43:28.808242 | orchestrator | 2026-03-29 02:43:09 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-29 02:43:28.808257 | orchestrator | 2026-03-29 02:43:09 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-29 02:43:28.808272 | orchestrator | 2026-03-29 02:43:09 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-29 02:43:28.808287 | orchestrator | 2026-03-29 02:43:09 | INFO  | Variable preparation completed 2026-03-29 02:43:28.808302 | orchestrator | 2026-03-29 02:43:10 | INFO  | Starting inventory overwrite handling 2026-03-29 02:43:28.808317 | orchestrator | 2026-03-29 02:43:10 | INFO  | Handling group overwrites in 99-overwrite 2026-03-29 02:43:28.808332 | orchestrator | 2026-03-29 02:43:10 | INFO  | Removing group frr:children from 60-generic 2026-03-29 02:43:28.808348 | orchestrator | 2026-03-29 02:43:10 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-29 02:43:28.808363 | orchestrator | 2026-03-29 02:43:10 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-29 02:43:28.808410 | orchestrator | 2026-03-29 02:43:10 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-29 02:43:28.808427 | orchestrator | 2026-03-29 02:43:10 | INFO  | Handling group overwrites in 20-roles 2026-03-29 02:43:28.808441 | orchestrator | 2026-03-29 02:43:10 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-29 02:43:28.808456 | orchestrator | 2026-03-29 02:43:10 | INFO  | Removed 5 group(s) in total 2026-03-29 02:43:28.808471 | orchestrator | 2026-03-29 02:43:10 | INFO  | Inventory overwrite handling completed 2026-03-29 02:43:28.808486 | orchestrator | 2026-03-29 02:43:12 | INFO  | Starting merge of inventory files 2026-03-29 02:43:28.808500 | orchestrator | 2026-03-29 02:43:12 | INFO  | Inventory files merged successfully 2026-03-29 02:43:28.808514 | orchestrator | 2026-03-29 02:43:16 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-29 02:43:28.808528 | orchestrator | 2026-03-29 02:43:27 | INFO  | Successfully wrote ClusterShell configuration 2026-03-29 02:43:28.808543 | orchestrator | [master d3e5608] 2026-03-29-02-43 2026-03-29 02:43:28.808559 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-03-29 02:43:30.894312 | orchestrator | 2026-03-29 02:43:30 | INFO  | Task 5f8bf68f-6cd2-4f45-9d40-60236bcdc482 (ceph-create-lvm-devices) was prepared for execution. 2026-03-29 02:43:30.894444 | orchestrator | 2026-03-29 02:43:30 | INFO  | It takes a moment until task 5f8bf68f-6cd2-4f45-9d40-60236bcdc482 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-03-29 02:43:43.534322 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-29 02:43:43.534414 | orchestrator | 2.16.14 2026-03-29 02:43:43.534425 | orchestrator | 2026-03-29 02:43:43.534433 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-29 02:43:43.534441 | orchestrator | 2026-03-29 02:43:43.534448 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 02:43:43.534455 | orchestrator | Sunday 29 March 2026 02:43:35 +0000 (0:00:00.313) 0:00:00.313 ********** 2026-03-29 02:43:43.534463 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 02:43:43.534469 | orchestrator | 2026-03-29 02:43:43.534476 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 02:43:43.534483 | orchestrator | Sunday 29 March 2026 02:43:35 +0000 (0:00:00.268) 0:00:00.582 ********** 2026-03-29 02:43:43.534490 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:43:43.534497 | orchestrator | 2026-03-29 02:43:43.534503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534510 | orchestrator | Sunday 29 March 2026 02:43:35 +0000 (0:00:00.234) 0:00:00.816 ********** 2026-03-29 02:43:43.534517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-29 02:43:43.534524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-29 02:43:43.534543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-29 02:43:43.534550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-29 02:43:43.534557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-29 
02:43:43.534563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-29 02:43:43.534570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-29 02:43:43.534576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-29 02:43:43.534583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-29 02:43:43.534590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-29 02:43:43.534615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-29 02:43:43.534622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-29 02:43:43.534629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-29 02:43:43.534635 | orchestrator | 2026-03-29 02:43:43.534642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534649 | orchestrator | Sunday 29 March 2026 02:43:36 +0000 (0:00:00.551) 0:00:01.367 ********** 2026-03-29 02:43:43.534656 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.534662 | orchestrator | 2026-03-29 02:43:43.534669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534676 | orchestrator | Sunday 29 March 2026 02:43:36 +0000 (0:00:00.250) 0:00:01.617 ********** 2026-03-29 02:43:43.534682 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.534689 | orchestrator | 2026-03-29 02:43:43.534695 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534702 | orchestrator | Sunday 29 March 2026 02:43:36 +0000 (0:00:00.215) 0:00:01.833 ********** 2026-03-29 
02:43:43.534708 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.534715 | orchestrator | 2026-03-29 02:43:43.534721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534728 | orchestrator | Sunday 29 March 2026 02:43:36 +0000 (0:00:00.207) 0:00:02.041 ********** 2026-03-29 02:43:43.534735 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.534741 | orchestrator | 2026-03-29 02:43:43.534791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534800 | orchestrator | Sunday 29 March 2026 02:43:37 +0000 (0:00:00.251) 0:00:02.292 ********** 2026-03-29 02:43:43.534807 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.534813 | orchestrator | 2026-03-29 02:43:43.534820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534827 | orchestrator | Sunday 29 March 2026 02:43:37 +0000 (0:00:00.283) 0:00:02.575 ********** 2026-03-29 02:43:43.534834 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.534840 | orchestrator | 2026-03-29 02:43:43.534847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534854 | orchestrator | Sunday 29 March 2026 02:43:37 +0000 (0:00:00.207) 0:00:02.783 ********** 2026-03-29 02:43:43.534860 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.534866 | orchestrator | 2026-03-29 02:43:43.534873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534880 | orchestrator | Sunday 29 March 2026 02:43:37 +0000 (0:00:00.214) 0:00:02.997 ********** 2026-03-29 02:43:43.534886 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.534893 | orchestrator | 2026-03-29 02:43:43.534900 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-29 02:43:43.534906 | orchestrator | Sunday 29 March 2026 02:43:38 +0000 (0:00:00.215) 0:00:03.212 ********** 2026-03-29 02:43:43.534913 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548) 2026-03-29 02:43:43.534921 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548) 2026-03-29 02:43:43.534927 | orchestrator | 2026-03-29 02:43:43.534934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534952 | orchestrator | Sunday 29 March 2026 02:43:38 +0000 (0:00:00.737) 0:00:03.949 ********** 2026-03-29 02:43:43.534960 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472) 2026-03-29 02:43:43.534966 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472) 2026-03-29 02:43:43.534973 | orchestrator | 2026-03-29 02:43:43.534980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.534993 | orchestrator | Sunday 29 March 2026 02:43:39 +0000 (0:00:00.726) 0:00:04.676 ********** 2026-03-29 02:43:43.534999 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249) 2026-03-29 02:43:43.535006 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249) 2026-03-29 02:43:43.535012 | orchestrator | 2026-03-29 02:43:43.535019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.535026 | orchestrator | Sunday 29 March 2026 02:43:40 +0000 (0:00:00.960) 0:00:05.636 ********** 2026-03-29 02:43:43.535032 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e) 2026-03-29 02:43:43.535039 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e) 2026-03-29 02:43:43.535046 | orchestrator | 2026-03-29 02:43:43.535057 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:43:43.535065 | orchestrator | Sunday 29 March 2026 02:43:41 +0000 (0:00:00.467) 0:00:06.103 ********** 2026-03-29 02:43:43.535071 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 02:43:43.535078 | orchestrator | 2026-03-29 02:43:43.535085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:43.535091 | orchestrator | Sunday 29 March 2026 02:43:41 +0000 (0:00:00.326) 0:00:06.430 ********** 2026-03-29 02:43:43.535098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-29 02:43:43.535104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-29 02:43:43.535111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-29 02:43:43.535117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-29 02:43:43.535124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-29 02:43:43.535130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-29 02:43:43.535137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-29 02:43:43.535144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-29 02:43:43.535150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-29 02:43:43.535157 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-29 02:43:43.535163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-29 02:43:43.535170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-29 02:43:43.535176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-29 02:43:43.535183 | orchestrator | 2026-03-29 02:43:43.535190 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:43.535196 | orchestrator | Sunday 29 March 2026 02:43:41 +0000 (0:00:00.411) 0:00:06.841 ********** 2026-03-29 02:43:43.535203 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.535209 | orchestrator | 2026-03-29 02:43:43.535216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:43.535222 | orchestrator | Sunday 29 March 2026 02:43:42 +0000 (0:00:00.225) 0:00:07.067 ********** 2026-03-29 02:43:43.535229 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.535235 | orchestrator | 2026-03-29 02:43:43.535242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:43.535249 | orchestrator | Sunday 29 March 2026 02:43:42 +0000 (0:00:00.193) 0:00:07.261 ********** 2026-03-29 02:43:43.535255 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.535262 | orchestrator | 2026-03-29 02:43:43.535273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:43.535280 | orchestrator | Sunday 29 March 2026 02:43:42 +0000 (0:00:00.209) 0:00:07.470 ********** 2026-03-29 02:43:43.535286 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.535293 | orchestrator | 2026-03-29 02:43:43.535299 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-29 02:43:43.535306 | orchestrator | Sunday 29 March 2026 02:43:42 +0000 (0:00:00.219) 0:00:07.690 ********** 2026-03-29 02:43:43.535312 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.535319 | orchestrator | 2026-03-29 02:43:43.535326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:43.535332 | orchestrator | Sunday 29 March 2026 02:43:42 +0000 (0:00:00.198) 0:00:07.888 ********** 2026-03-29 02:43:43.535339 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.535345 | orchestrator | 2026-03-29 02:43:43.535352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:43.535359 | orchestrator | Sunday 29 March 2026 02:43:43 +0000 (0:00:00.479) 0:00:08.368 ********** 2026-03-29 02:43:43.535365 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:43.535372 | orchestrator | 2026-03-29 02:43:43.535382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:51.303420 | orchestrator | Sunday 29 March 2026 02:43:43 +0000 (0:00:00.201) 0:00:08.569 ********** 2026-03-29 02:43:51.303561 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.303578 | orchestrator | 2026-03-29 02:43:51.303590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:51.303601 | orchestrator | Sunday 29 March 2026 02:43:43 +0000 (0:00:00.188) 0:00:08.758 ********** 2026-03-29 02:43:51.303610 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-29 02:43:51.303622 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-29 02:43:51.303631 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-29 02:43:51.303640 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-29 02:43:51.303649 | orchestrator | 2026-03-29 
02:43:51.303658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:51.303667 | orchestrator | Sunday 29 March 2026 02:43:44 +0000 (0:00:00.705) 0:00:09.463 ********** 2026-03-29 02:43:51.303676 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.303684 | orchestrator | 2026-03-29 02:43:51.303692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:51.303701 | orchestrator | Sunday 29 March 2026 02:43:44 +0000 (0:00:00.205) 0:00:09.668 ********** 2026-03-29 02:43:51.303710 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.303719 | orchestrator | 2026-03-29 02:43:51.303728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:51.303814 | orchestrator | Sunday 29 March 2026 02:43:44 +0000 (0:00:00.208) 0:00:09.877 ********** 2026-03-29 02:43:51.303828 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.303835 | orchestrator | 2026-03-29 02:43:51.303844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:43:51.303852 | orchestrator | Sunday 29 March 2026 02:43:45 +0000 (0:00:00.214) 0:00:10.092 ********** 2026-03-29 02:43:51.303859 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.303867 | orchestrator | 2026-03-29 02:43:51.303875 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-29 02:43:51.303883 | orchestrator | Sunday 29 March 2026 02:43:45 +0000 (0:00:00.194) 0:00:10.287 ********** 2026-03-29 02:43:51.303891 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.303899 | orchestrator | 2026-03-29 02:43:51.303908 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-29 02:43:51.303916 | orchestrator | Sunday 29 March 2026 02:43:45 +0000 (0:00:00.142) 
0:00:10.429 ********** 2026-03-29 02:43:51.303926 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6a86fe60-1e0e-551e-abcc-872f54df7e3c'}}) 2026-03-29 02:43:51.303965 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '09734191-f9bf-5626-be02-fa226447c12f'}}) 2026-03-29 02:43:51.303973 | orchestrator | 2026-03-29 02:43:51.303982 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-29 02:43:51.303992 | orchestrator | Sunday 29 March 2026 02:43:45 +0000 (0:00:00.176) 0:00:10.606 ********** 2026-03-29 02:43:51.304003 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}) 2026-03-29 02:43:51.304016 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}) 2026-03-29 02:43:51.304025 | orchestrator | 2026-03-29 02:43:51.304034 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-29 02:43:51.304044 | orchestrator | Sunday 29 March 2026 02:43:47 +0000 (0:00:02.001) 0:00:12.607 ********** 2026-03-29 02:43:51.304054 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 02:43:51.304066 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 02:43:51.304075 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.304083 | orchestrator | 2026-03-29 02:43:51.304092 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-29 02:43:51.304100 | orchestrator | Sunday 29 March 2026 
02:43:47 +0000 (0:00:00.314) 0:00:12.922 ********** 2026-03-29 02:43:51.304108 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}) 2026-03-29 02:43:51.304118 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}) 2026-03-29 02:43:51.304126 | orchestrator | 2026-03-29 02:43:51.304135 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-29 02:43:51.304145 | orchestrator | Sunday 29 March 2026 02:43:49 +0000 (0:00:01.572) 0:00:14.494 ********** 2026-03-29 02:43:51.304155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 02:43:51.304165 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 02:43:51.304174 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.304182 | orchestrator | 2026-03-29 02:43:51.304192 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-29 02:43:51.304201 | orchestrator | Sunday 29 March 2026 02:43:49 +0000 (0:00:00.153) 0:00:14.648 ********** 2026-03-29 02:43:51.304235 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:43:51.304245 | orchestrator | 2026-03-29 02:43:51.304254 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-29 02:43:51.304264 | orchestrator | Sunday 29 March 2026 02:43:49 +0000 (0:00:00.132) 0:00:14.781 ********** 2026-03-29 02:43:51.304272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 
'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:51.304280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:51.304287 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304295 | orchestrator |
2026-03-29 02:43:51.304303 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-29 02:43:51.304311 | orchestrator | Sunday 29 March 2026 02:43:49 +0000 (0:00:00.150) 0:00:14.931 **********
2026-03-29 02:43:51.304329 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304338 | orchestrator |
2026-03-29 02:43:51.304345 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-29 02:43:51.304353 | orchestrator | Sunday 29 March 2026 02:43:50 +0000 (0:00:00.137) 0:00:15.069 **********
2026-03-29 02:43:51.304370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:51.304379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:51.304387 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304395 | orchestrator |
2026-03-29 02:43:51.304403 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-29 02:43:51.304411 | orchestrator | Sunday 29 March 2026 02:43:50 +0000 (0:00:00.139) 0:00:15.209 **********
2026-03-29 02:43:51.304419 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304427 | orchestrator |
2026-03-29 02:43:51.304435 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-29 02:43:51.304443 | orchestrator | Sunday 29 March 2026 02:43:50 +0000 (0:00:00.136) 0:00:15.345 **********
2026-03-29 02:43:51.304450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:51.304458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:51.304466 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304474 | orchestrator |
2026-03-29 02:43:51.304481 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-29 02:43:51.304489 | orchestrator | Sunday 29 March 2026 02:43:50 +0000 (0:00:00.139) 0:00:15.492 **********
2026-03-29 02:43:51.304497 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:43:51.304507 | orchestrator |
2026-03-29 02:43:51.304515 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-29 02:43:51.304523 | orchestrator | Sunday 29 March 2026 02:43:50 +0000 (0:00:00.139) 0:00:15.632 **********
2026-03-29 02:43:51.304531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:51.304539 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:51.304548 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304556 | orchestrator |
2026-03-29 02:43:51.304565 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-29 02:43:51.304574 | orchestrator | Sunday 29 March 2026 02:43:50 +0000 (0:00:00.135) 0:00:15.768 **********
2026-03-29 02:43:51.304582 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:51.304590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:51.304598 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304607 | orchestrator |
2026-03-29 02:43:51.304615 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-29 02:43:51.304623 | orchestrator | Sunday 29 March 2026 02:43:51 +0000 (0:00:00.306) 0:00:16.075 **********
2026-03-29 02:43:51.304631 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:51.304640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:51.304657 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304666 | orchestrator |
2026-03-29 02:43:51.304675 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-29 02:43:51.304684 | orchestrator | Sunday 29 March 2026 02:43:51 +0000 (0:00:00.143) 0:00:16.218 **********
2026-03-29 02:43:51.304692 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:51.304700 | orchestrator |
2026-03-29 02:43:51.304706 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-29 02:43:51.304721 | orchestrator | Sunday 29 March 2026 02:43:51 +0000 (0:00:00.121) 0:00:16.339 **********
2026-03-29 02:43:57.494954 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.495070 | orchestrator |
2026-03-29 02:43:57.495087 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-29 02:43:57.495101 | orchestrator | Sunday 29 March 2026 02:43:51 +0000 (0:00:00.134) 0:00:16.474 **********
2026-03-29 02:43:57.495112 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.495124 | orchestrator |
2026-03-29 02:43:57.495136 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-29 02:43:57.495147 | orchestrator | Sunday 29 March 2026 02:43:51 +0000 (0:00:00.134) 0:00:16.608 **********
2026-03-29 02:43:57.495158 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 02:43:57.495170 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-29 02:43:57.495181 | orchestrator | }
2026-03-29 02:43:57.495192 | orchestrator |
2026-03-29 02:43:57.495203 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-29 02:43:57.495214 | orchestrator | Sunday 29 March 2026 02:43:51 +0000 (0:00:00.125) 0:00:16.734 **********
2026-03-29 02:43:57.495225 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 02:43:57.495235 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-29 02:43:57.495246 | orchestrator | }
2026-03-29 02:43:57.495257 | orchestrator |
2026-03-29 02:43:57.495268 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-29 02:43:57.495295 | orchestrator | Sunday 29 March 2026 02:43:51 +0000 (0:00:00.146) 0:00:16.880 **********
2026-03-29 02:43:57.495306 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 02:43:57.495318 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-29 02:43:57.495329 | orchestrator | }
2026-03-29 02:43:57.495339 | orchestrator |
2026-03-29 02:43:57.495351 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-29 02:43:57.495362 | orchestrator | Sunday 29 March 2026 02:43:51 +0000 (0:00:00.131) 0:00:17.012 **********
2026-03-29 02:43:57.495373 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:43:57.495416 | orchestrator |
2026-03-29 02:43:57.495430 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-29 02:43:57.495443 | orchestrator | Sunday 29 March 2026 02:43:52 +0000 (0:00:00.693) 0:00:17.705 **********
2026-03-29 02:43:57.495456 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:43:57.495468 | orchestrator |
2026-03-29 02:43:57.495480 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-29 02:43:57.495493 | orchestrator | Sunday 29 March 2026 02:43:53 +0000 (0:00:00.532) 0:00:18.238 **********
2026-03-29 02:43:57.495505 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:43:57.495519 | orchestrator |
2026-03-29 02:43:57.495532 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-29 02:43:57.495544 | orchestrator | Sunday 29 March 2026 02:43:53 +0000 (0:00:00.525) 0:00:18.763 **********
2026-03-29 02:43:57.495557 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:43:57.495572 | orchestrator |
2026-03-29 02:43:57.495592 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-29 02:43:57.495612 | orchestrator | Sunday 29 March 2026 02:43:54 +0000 (0:00:00.302) 0:00:19.066 **********
2026-03-29 02:43:57.495632 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.495651 | orchestrator |
2026-03-29 02:43:57.495670 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-29 02:43:57.495721 | orchestrator | Sunday 29 March 2026 02:43:54 +0000 (0:00:00.116) 0:00:19.182 **********
2026-03-29 02:43:57.495742 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.495789 | orchestrator |
2026-03-29 02:43:57.495808 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-29 02:43:57.495825 | orchestrator | Sunday 29 March 2026 02:43:54 +0000 (0:00:00.099) 0:00:19.281 **********
2026-03-29 02:43:57.495843 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 02:43:57.495861 | orchestrator |     "vgs_report": {
2026-03-29 02:43:57.495880 | orchestrator |         "vg": []
2026-03-29 02:43:57.495896 | orchestrator |     }
2026-03-29 02:43:57.495912 | orchestrator | }
2026-03-29 02:43:57.495928 | orchestrator |
2026-03-29 02:43:57.495948 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-29 02:43:57.495967 | orchestrator | Sunday 29 March 2026 02:43:54 +0000 (0:00:00.144) 0:00:19.426 **********
2026-03-29 02:43:57.495984 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496001 | orchestrator |
2026-03-29 02:43:57.496019 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-29 02:43:57.496036 | orchestrator | Sunday 29 March 2026 02:43:54 +0000 (0:00:00.136) 0:00:19.556 **********
2026-03-29 02:43:57.496054 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496073 | orchestrator |
2026-03-29 02:43:57.496092 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-29 02:43:57.496110 | orchestrator | Sunday 29 March 2026 02:43:54 +0000 (0:00:00.117) 0:00:19.693 **********
2026-03-29 02:43:57.496128 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496146 | orchestrator |
2026-03-29 02:43:57.496164 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-29 02:43:57.496182 | orchestrator | Sunday 29 March 2026 02:43:54 +0000 (0:00:00.117) 0:00:19.811 **********
2026-03-29 02:43:57.496199 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496218 | orchestrator |
2026-03-29 02:43:57.496254 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-29 02:43:57.496286 | orchestrator | Sunday 29 March 2026 02:43:54 +0000 (0:00:00.138) 0:00:19.950 **********
2026-03-29 02:43:57.496305 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496323 | orchestrator |
2026-03-29 02:43:57.496342 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-29 02:43:57.496359 | orchestrator | Sunday 29 March 2026 02:43:55 +0000 (0:00:00.134) 0:00:20.084 **********
2026-03-29 02:43:57.496378 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496396 | orchestrator |
2026-03-29 02:43:57.496415 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-29 02:43:57.496433 | orchestrator | Sunday 29 March 2026 02:43:55 +0000 (0:00:00.134) 0:00:20.219 **********
2026-03-29 02:43:57.496452 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496471 | orchestrator |
2026-03-29 02:43:57.496490 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-29 02:43:57.496509 | orchestrator | Sunday 29 March 2026 02:43:55 +0000 (0:00:00.142) 0:00:20.361 **********
2026-03-29 02:43:57.496558 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496577 | orchestrator |
2026-03-29 02:43:57.496596 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-29 02:43:57.496614 | orchestrator | Sunday 29 March 2026 02:43:55 +0000 (0:00:00.299) 0:00:20.660 **********
2026-03-29 02:43:57.496631 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496648 | orchestrator |
2026-03-29 02:43:57.496666 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-29 02:43:57.496684 | orchestrator | Sunday 29 March 2026 02:43:55 +0000 (0:00:00.133) 0:00:20.794 **********
2026-03-29 02:43:57.496704 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496721 | orchestrator |
2026-03-29 02:43:57.496739 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-29 02:43:57.496758 | orchestrator | Sunday 29 March 2026 02:43:55 +0000 (0:00:00.143) 0:00:20.938 **********
2026-03-29 02:43:57.496826 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496845 | orchestrator |
2026-03-29 02:43:57.496864 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-29 02:43:57.496952 | orchestrator | Sunday 29 March 2026 02:43:56 +0000 (0:00:00.131) 0:00:21.069 **********
2026-03-29 02:43:57.496975 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.496994 | orchestrator |
2026-03-29 02:43:57.497031 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-29 02:43:57.497051 | orchestrator | Sunday 29 March 2026 02:43:56 +0000 (0:00:00.134) 0:00:21.204 **********
2026-03-29 02:43:57.497070 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.497088 | orchestrator |
2026-03-29 02:43:57.497107 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-29 02:43:57.497127 | orchestrator | Sunday 29 March 2026 02:43:56 +0000 (0:00:00.138) 0:00:21.343 **********
2026-03-29 02:43:57.497145 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.497162 | orchestrator |
2026-03-29 02:43:57.497181 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-29 02:43:57.497200 | orchestrator | Sunday 29 March 2026 02:43:56 +0000 (0:00:00.140) 0:00:21.484 **********
2026-03-29 02:43:57.497221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:57.497243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:57.497262 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.497305 | orchestrator |
2026-03-29 02:43:57.497318 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-29 02:43:57.497329 | orchestrator | Sunday 29 March 2026 02:43:56 +0000 (0:00:00.158) 0:00:21.643 **********
2026-03-29 02:43:57.497341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:57.497352 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:57.497363 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.497374 | orchestrator |
2026-03-29 02:43:57.497385 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-29 02:43:57.497396 | orchestrator | Sunday 29 March 2026 02:43:56 +0000 (0:00:00.148) 0:00:21.791 **********
2026-03-29 02:43:57.497407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:57.497418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:57.497429 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.497440 | orchestrator |
2026-03-29 02:43:57.497450 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-29 02:43:57.497461 | orchestrator | Sunday 29 March 2026 02:43:56 +0000 (0:00:00.151) 0:00:21.942 **********
2026-03-29 02:43:57.497472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:57.497483 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:57.497494 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.497504 | orchestrator |
2026-03-29 02:43:57.497518 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-29 02:43:57.497537 | orchestrator | Sunday 29 March 2026 02:43:57 +0000 (0:00:00.151) 0:00:22.094 **********
2026-03-29 02:43:57.497581 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:43:57.497601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:43:57.497619 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:43:57.497636 | orchestrator |
2026-03-29 02:43:57.497653 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-29 02:43:57.497672 | orchestrator | Sunday 29 March 2026 02:43:57 +0000 (0:00:00.147) 0:00:22.386 **********
2026-03-29 02:43:57.497707 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:44:02.477818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:44:02.477945 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:44:02.477957 | orchestrator |
2026-03-29 02:44:02.477966 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-29 02:44:02.477974 | orchestrator | Sunday 29 March 2026 02:43:57 +0000 (0:00:00.147) 0:00:22.533 **********
2026-03-29 02:44:02.477981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:44:02.477988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:44:02.477995 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:44:02.478001 | orchestrator |
2026-03-29 02:44:02.478092 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-29 02:44:02.478110 | orchestrator | Sunday 29 March 2026 02:43:57 +0000 (0:00:00.162) 0:00:22.695 **********
2026-03-29 02:44:02.478120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:44:02.478131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:44:02.478142 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:44:02.478152 | orchestrator |
2026-03-29 02:44:02.478162 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-29 02:44:02.478172 | orchestrator | Sunday 29 March 2026 02:43:57 +0000 (0:00:00.188) 0:00:22.884 **********
2026-03-29 02:44:02.478183 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:44:02.478195 | orchestrator |
2026-03-29 02:44:02.478205 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-29 02:44:02.478215 | orchestrator | Sunday 29 March 2026 02:43:58 +0000 (0:00:00.534) 0:00:23.419 **********
2026-03-29 02:44:02.478225 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:44:02.478234 | orchestrator |
2026-03-29 02:44:02.478244 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-29 02:44:02.478255 | orchestrator | Sunday 29 March 2026 02:43:58 +0000 (0:00:00.552) 0:00:23.971 **********
2026-03-29 02:44:02.478266 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:44:02.478278 | orchestrator |
2026-03-29 02:44:02.478290 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-29 02:44:02.478302 | orchestrator | Sunday 29 March 2026 02:43:59 +0000 (0:00:00.150) 0:00:24.122 **********
2026-03-29 02:44:02.478314 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'vg_name': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:44:02.478327 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'vg_name': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:44:02.478372 | orchestrator |
2026-03-29 02:44:02.478384 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-29 02:44:02.478395 | orchestrator | Sunday 29 March 2026 02:43:59 +0000 (0:00:00.180) 0:00:24.303 **********
2026-03-29 02:44:02.478406 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:44:02.478416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:44:02.478428 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:44:02.478439 | orchestrator |
2026-03-29 02:44:02.478451 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-29 02:44:02.478462 | orchestrator | Sunday 29 March 2026 02:43:59 +0000 (0:00:00.155) 0:00:24.458 **********
2026-03-29 02:44:02.478473 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:44:02.478485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:44:02.478496 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:44:02.478506 | orchestrator |
2026-03-29 02:44:02.478518 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-29 02:44:02.478529 | orchestrator | Sunday 29 March 2026 02:43:59 +0000 (0:00:00.143) 0:00:24.602 **********
2026-03-29 02:44:02.478540 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:44:02.478551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:44:02.478562 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:44:02.478573 | orchestrator |
2026-03-29 02:44:02.478585 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-29 02:44:02.478595 | orchestrator | Sunday 29 March 2026 02:43:59 +0000 (0:00:00.139) 0:00:24.742 **********
2026-03-29 02:44:02.478632 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 02:44:02.478643 | orchestrator |     "lvm_report": {
2026-03-29 02:44:02.478651 | orchestrator |         "lv": [
2026-03-29 02:44:02.478657 | orchestrator |             {
2026-03-29 02:44:02.478664 | orchestrator |                 "lv_name": "osd-block-09734191-f9bf-5626-be02-fa226447c12f",
2026-03-29 02:44:02.478671 | orchestrator |                 "vg_name": "ceph-09734191-f9bf-5626-be02-fa226447c12f"
2026-03-29 02:44:02.478677 | orchestrator |             },
2026-03-29 02:44:02.478683 | orchestrator |             {
2026-03-29 02:44:02.478690 | orchestrator |                 "lv_name": "osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c",
2026-03-29 02:44:02.478701 | orchestrator |                 "vg_name": "ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c"
2026-03-29 02:44:02.478711 | orchestrator |             }
2026-03-29 02:44:02.478721 | orchestrator |         ],
2026-03-29 02:44:02.478731 | orchestrator |         "pv": [
2026-03-29 02:44:02.478741 | orchestrator |             {
2026-03-29 02:44:02.478751 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-29 02:44:02.478760 | orchestrator |                 "vg_name": "ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c"
2026-03-29 02:44:02.478863 | orchestrator |             },
2026-03-29 02:44:02.478875 | orchestrator |             {
2026-03-29 02:44:02.478896 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-29 02:44:02.478907 | orchestrator |                 "vg_name": "ceph-09734191-f9bf-5626-be02-fa226447c12f"
2026-03-29 02:44:02.478918 | orchestrator |             }
2026-03-29 02:44:02.478929 | orchestrator |         ]
2026-03-29 02:44:02.478939 | orchestrator |     }
2026-03-29 02:44:02.478948 | orchestrator | }
2026-03-29 02:44:02.478958 | orchestrator |
2026-03-29 02:44:02.478979 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-29 02:44:02.478988 | orchestrator |
2026-03-29 02:44:02.478998 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-29 02:44:02.479008 | orchestrator | Sunday 29 March 2026 02:44:00 +0000 (0:00:00.425) 0:00:25.167 **********
2026-03-29 02:44:02.479018 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-29 02:44:02.479028 | orchestrator |
2026-03-29 02:44:02.479038 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-29 02:44:02.479048 | orchestrator | Sunday 29 March 2026 02:44:00 +0000 (0:00:00.230) 0:00:25.430 **********
2026-03-29 02:44:02.479057 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:44:02.479067 | orchestrator |
2026-03-29 02:44:02.479078 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:02.479087 | orchestrator | Sunday 29 March 2026 02:44:00 +0000 (0:00:00.202) 0:00:25.661 **********
2026-03-29 02:44:02.479098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-29 02:44:02.479107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-29 02:44:02.479116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-29 02:44:02.479126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-29 02:44:02.479136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-29 02:44:02.479146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-29 02:44:02.479155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-29 02:44:02.479165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-29 02:44:02.479176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-29 02:44:02.479186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-29 02:44:02.479196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-29 02:44:02.479206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-29 02:44:02.479216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-29 02:44:02.479226 | orchestrator |
2026-03-29 02:44:02.479236 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:02.479247 | orchestrator | Sunday 29 March 2026 02:44:00 +0000 (0:00:00.373) 0:00:26.034 **********
2026-03-29 02:44:02.479258 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:02.479269 | orchestrator |
2026-03-29 02:44:02.479279 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:02.479290 | orchestrator | Sunday 29 March 2026 02:44:01 +0000 (0:00:00.202) 0:00:26.236 **********
2026-03-29 02:44:02.479300 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:02.479310 | orchestrator |
2026-03-29 02:44:02.479321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:02.479331 | orchestrator | Sunday 29 March 2026 02:44:01 +0000 (0:00:00.194) 0:00:26.431 **********
2026-03-29 02:44:02.479341 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:02.479350 | orchestrator |
2026-03-29 02:44:02.479360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:02.479371 | orchestrator | Sunday 29 March 2026 02:44:01 +0000 (0:00:00.193) 0:00:26.624 **********
2026-03-29 02:44:02.479381 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:02.479392 | orchestrator |
2026-03-29 02:44:02.479402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:02.479412 | orchestrator | Sunday 29 March 2026 02:44:01 +0000 (0:00:00.191) 0:00:26.816 **********
2026-03-29 02:44:02.479434 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:02.479445 | orchestrator |
2026-03-29 02:44:02.479455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:02.479465 | orchestrator | Sunday 29 March 2026 02:44:01 +0000 (0:00:00.186) 0:00:27.003 **********
2026-03-29 02:44:02.479475 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:02.479486 | orchestrator |
2026-03-29 02:44:02.479515 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:12.813298 | orchestrator | Sunday 29 March 2026 02:44:02 +0000 (0:00:00.511) 0:00:27.515 **********
2026-03-29 02:44:12.813423 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:12.813441 | orchestrator |
2026-03-29 02:44:12.813454 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:12.813466 | orchestrator | Sunday 29 March 2026 02:44:02 +0000 (0:00:00.192) 0:00:27.707 **********
2026-03-29 02:44:12.813477 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:12.813488 | orchestrator |
2026-03-29 02:44:12.813499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:12.813510 | orchestrator | Sunday 29 March 2026 02:44:02 +0000 (0:00:00.190) 0:00:27.897 **********
2026-03-29 02:44:12.813520 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb)
2026-03-29 02:44:12.813532 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb)
2026-03-29 02:44:12.813543 | orchestrator |
2026-03-29 02:44:12.813569 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:12.813581 | orchestrator | Sunday 29 March 2026 02:44:03 +0000 (0:00:00.415) 0:00:28.313 **********
2026-03-29 02:44:12.813591 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0)
2026-03-29 02:44:12.813602 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0)
2026-03-29 02:44:12.813613 | orchestrator |
2026-03-29 02:44:12.813624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:12.813635 | orchestrator | Sunday 29 March 2026 02:44:03 +0000 (0:00:00.435) 0:00:28.748 **********
2026-03-29 02:44:12.813645 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62)
2026-03-29 02:44:12.813656 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62)
2026-03-29 02:44:12.813667 | orchestrator |
2026-03-29 02:44:12.813678 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:12.813689 | orchestrator | Sunday 29 March 2026 02:44:04 +0000 (0:00:00.436) 0:00:29.184 **********
2026-03-29 02:44:12.813700 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a)
2026-03-29 02:44:12.813710 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a)
2026-03-29 02:44:12.813721 | orchestrator |
2026-03-29 02:44:12.813732 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 02:44:12.813742 | orchestrator | Sunday 29 March 2026 02:44:04 +0000 (0:00:00.420) 0:00:29.604 **********
2026-03-29 02:44:12.813753 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-29 02:44:12.813763 | orchestrator |
2026-03-29 02:44:12.813798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:44:12.813810 | orchestrator | Sunday 29 March 2026 02:44:04 +0000 (0:00:00.324) 0:00:29.929 **********
2026-03-29 02:44:12.813823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-29 02:44:12.813837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-29 02:44:12.813849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-29 02:44:12.813886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-29 02:44:12.813899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-29 02:44:12.813912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-29 02:44:12.813924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-29 02:44:12.813937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-29 02:44:12.813949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-29 02:44:12.813961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-29 02:44:12.813973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-29 02:44:12.813985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-29 02:44:12.813997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-29 02:44:12.814010 | orchestrator |
2026-03-29 02:44:12.814082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:44:12.814095 | orchestrator | Sunday 29 March 2026 02:44:05 +0000 (0:00:00.373) 0:00:30.302 **********
2026-03-29 02:44:12.814107 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:12.814120 | orchestrator |
2026-03-29 02:44:12.814132 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:44:12.814145 | orchestrator | Sunday 29 March 2026 02:44:05 +0000 (0:00:00.238) 0:00:30.541 **********
2026-03-29 02:44:12.814158 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:12.814171 | orchestrator |
2026-03-29 02:44:12.814182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:44:12.814193 | orchestrator | Sunday 29 March 2026 02:44:05 +0000 (0:00:00.195) 0:00:30.736 **********
2026-03-29 02:44:12.814204 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:12.814215 | orchestrator |
2026-03-29 02:44:12.814243 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:44:12.814255 | orchestrator | Sunday 29 March 2026 02:44:06 +0000 (0:00:00.510) 0:00:31.246 **********
2026-03-29 02:44:12.814266 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:12.814277 | orchestrator |
2026-03-29 02:44:12.814288 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:44:12.814299 | orchestrator | Sunday 29 March 2026 02:44:06 +0000 (0:00:00.198) 0:00:31.445 **********
2026-03-29 02:44:12.814309 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:12.814320 | orchestrator |
2026-03-29 02:44:12.814331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:44:12.814342 | orchestrator | Sunday 29 March 2026 02:44:06 +0000 (0:00:00.195) 0:00:31.640 **********
2026-03-29 02:44:12.814352 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:44:12.814363 | orchestrator |
2026-03-29 02:44:12.814374 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 02:44:12.814385 | orchestrator | Sunday 29 March 2026 02:44:06 +0000 (0:00:00.197)
0:00:31.837 ********** 2026-03-29 02:44:12.814401 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:12.814412 | orchestrator | 2026-03-29 02:44:12.814423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:12.814434 | orchestrator | Sunday 29 March 2026 02:44:06 +0000 (0:00:00.200) 0:00:32.037 ********** 2026-03-29 02:44:12.814444 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:12.814455 | orchestrator | 2026-03-29 02:44:12.814466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:12.814476 | orchestrator | Sunday 29 March 2026 02:44:07 +0000 (0:00:00.201) 0:00:32.239 ********** 2026-03-29 02:44:12.814487 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-29 02:44:12.814507 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-29 02:44:12.814518 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-29 02:44:12.814528 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-29 02:44:12.814539 | orchestrator | 2026-03-29 02:44:12.814550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:12.814561 | orchestrator | Sunday 29 March 2026 02:44:07 +0000 (0:00:00.681) 0:00:32.921 ********** 2026-03-29 02:44:12.814571 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:12.814582 | orchestrator | 2026-03-29 02:44:12.814593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:12.814604 | orchestrator | Sunday 29 March 2026 02:44:08 +0000 (0:00:00.235) 0:00:33.156 ********** 2026-03-29 02:44:12.814614 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:12.814625 | orchestrator | 2026-03-29 02:44:12.814636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:12.814647 | orchestrator | Sunday 29 
March 2026 02:44:08 +0000 (0:00:00.215) 0:00:33.372 ********** 2026-03-29 02:44:12.814658 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:12.814668 | orchestrator | 2026-03-29 02:44:12.814679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:12.814690 | orchestrator | Sunday 29 March 2026 02:44:08 +0000 (0:00:00.233) 0:00:33.606 ********** 2026-03-29 02:44:12.814700 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:12.814711 | orchestrator | 2026-03-29 02:44:12.814722 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-29 02:44:12.814733 | orchestrator | Sunday 29 March 2026 02:44:08 +0000 (0:00:00.213) 0:00:33.819 ********** 2026-03-29 02:44:12.814743 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:12.814754 | orchestrator | 2026-03-29 02:44:12.814765 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-29 02:44:12.814836 | orchestrator | Sunday 29 March 2026 02:44:09 +0000 (0:00:00.384) 0:00:34.204 ********** 2026-03-29 02:44:12.814849 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df205cf6-8b40-53f0-aec9-c93c6a681056'}}) 2026-03-29 02:44:12.814860 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}}) 2026-03-29 02:44:12.814871 | orchestrator | 2026-03-29 02:44:12.814882 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-29 02:44:12.814892 | orchestrator | Sunday 29 March 2026 02:44:09 +0000 (0:00:00.235) 0:00:34.439 ********** 2026-03-29 02:44:12.814905 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}) 2026-03-29 02:44:12.814917 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}) 2026-03-29 02:44:12.814928 | orchestrator | 2026-03-29 02:44:12.814939 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-29 02:44:12.814950 | orchestrator | Sunday 29 March 2026 02:44:11 +0000 (0:00:01.877) 0:00:36.317 ********** 2026-03-29 02:44:12.814961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:12.814973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:12.814984 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:12.814995 | orchestrator | 2026-03-29 02:44:12.815006 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-29 02:44:12.815017 | orchestrator | Sunday 29 March 2026 02:44:11 +0000 (0:00:00.169) 0:00:36.486 ********** 2026-03-29 02:44:12.815028 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}) 2026-03-29 02:44:12.815055 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}) 2026-03-29 02:44:18.822753 | orchestrator | 2026-03-29 02:44:18.822865 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-29 02:44:18.822873 | orchestrator | Sunday 29 March 2026 02:44:12 +0000 (0:00:01.360) 0:00:37.847 ********** 2026-03-29 02:44:18.822879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 
'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:18.822885 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:18.822890 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.822895 | orchestrator | 2026-03-29 02:44:18.822911 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-29 02:44:18.822916 | orchestrator | Sunday 29 March 2026 02:44:12 +0000 (0:00:00.166) 0:00:38.014 ********** 2026-03-29 02:44:18.822920 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.822925 | orchestrator | 2026-03-29 02:44:18.822929 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-29 02:44:18.822933 | orchestrator | Sunday 29 March 2026 02:44:13 +0000 (0:00:00.148) 0:00:38.162 ********** 2026-03-29 02:44:18.822937 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:18.822942 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:18.822946 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.822953 | orchestrator | 2026-03-29 02:44:18.822959 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-29 02:44:18.822966 | orchestrator | Sunday 29 March 2026 02:44:13 +0000 (0:00:00.182) 0:00:38.345 ********** 2026-03-29 02:44:18.822972 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.822979 | orchestrator | 2026-03-29 02:44:18.822986 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-29 02:44:18.822992 | orchestrator | Sunday 
29 March 2026 02:44:13 +0000 (0:00:00.157) 0:00:38.503 ********** 2026-03-29 02:44:18.822998 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:18.823004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:18.823011 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823019 | orchestrator | 2026-03-29 02:44:18.823026 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-29 02:44:18.823033 | orchestrator | Sunday 29 March 2026 02:44:13 +0000 (0:00:00.186) 0:00:38.689 ********** 2026-03-29 02:44:18.823040 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823046 | orchestrator | 2026-03-29 02:44:18.823057 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-29 02:44:18.823062 | orchestrator | Sunday 29 March 2026 02:44:13 +0000 (0:00:00.156) 0:00:38.846 ********** 2026-03-29 02:44:18.823066 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:18.823071 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:18.823075 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823079 | orchestrator | 2026-03-29 02:44:18.823084 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-29 02:44:18.823103 | orchestrator | Sunday 29 March 2026 02:44:13 +0000 (0:00:00.179) 0:00:39.026 ********** 2026-03-29 02:44:18.823108 | orchestrator | ok: [testbed-node-4] 
2026-03-29 02:44:18.823113 | orchestrator | 2026-03-29 02:44:18.823117 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-29 02:44:18.823122 | orchestrator | Sunday 29 March 2026 02:44:14 +0000 (0:00:00.153) 0:00:39.180 ********** 2026-03-29 02:44:18.823126 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:18.823130 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:18.823134 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823139 | orchestrator | 2026-03-29 02:44:18.823143 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-29 02:44:18.823147 | orchestrator | Sunday 29 March 2026 02:44:14 +0000 (0:00:00.499) 0:00:39.679 ********** 2026-03-29 02:44:18.823151 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:18.823155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:18.823159 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823164 | orchestrator | 2026-03-29 02:44:18.823168 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-29 02:44:18.823183 | orchestrator | Sunday 29 March 2026 02:44:14 +0000 (0:00:00.176) 0:00:39.856 ********** 2026-03-29 02:44:18.823187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 
02:44:18.823192 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:18.823196 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823200 | orchestrator | 2026-03-29 02:44:18.823204 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-29 02:44:18.823208 | orchestrator | Sunday 29 March 2026 02:44:14 +0000 (0:00:00.160) 0:00:40.017 ********** 2026-03-29 02:44:18.823212 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823217 | orchestrator | 2026-03-29 02:44:18.823225 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-29 02:44:18.823229 | orchestrator | Sunday 29 March 2026 02:44:15 +0000 (0:00:00.149) 0:00:40.167 ********** 2026-03-29 02:44:18.823233 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823237 | orchestrator | 2026-03-29 02:44:18.823241 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-29 02:44:18.823245 | orchestrator | Sunday 29 March 2026 02:44:15 +0000 (0:00:00.152) 0:00:40.319 ********** 2026-03-29 02:44:18.823250 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823254 | orchestrator | 2026-03-29 02:44:18.823258 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-29 02:44:18.823262 | orchestrator | Sunday 29 March 2026 02:44:15 +0000 (0:00:00.154) 0:00:40.474 ********** 2026-03-29 02:44:18.823266 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 02:44:18.823271 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-29 02:44:18.823275 | orchestrator | } 2026-03-29 02:44:18.823279 | orchestrator | 2026-03-29 02:44:18.823283 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-29 
02:44:18.823288 | orchestrator | Sunday 29 March 2026 02:44:15 +0000 (0:00:00.173) 0:00:40.647 ********** 2026-03-29 02:44:18.823292 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 02:44:18.823296 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-29 02:44:18.823305 | orchestrator | } 2026-03-29 02:44:18.823309 | orchestrator | 2026-03-29 02:44:18.823313 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-29 02:44:18.823317 | orchestrator | Sunday 29 March 2026 02:44:15 +0000 (0:00:00.167) 0:00:40.815 ********** 2026-03-29 02:44:18.823322 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 02:44:18.823327 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-29 02:44:18.823332 | orchestrator | } 2026-03-29 02:44:18.823337 | orchestrator | 2026-03-29 02:44:18.823342 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-29 02:44:18.823347 | orchestrator | Sunday 29 March 2026 02:44:15 +0000 (0:00:00.177) 0:00:40.992 ********** 2026-03-29 02:44:18.823351 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:44:18.823356 | orchestrator | 2026-03-29 02:44:18.823361 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-29 02:44:18.823366 | orchestrator | Sunday 29 March 2026 02:44:16 +0000 (0:00:00.540) 0:00:41.532 ********** 2026-03-29 02:44:18.823370 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:44:18.823375 | orchestrator | 2026-03-29 02:44:18.823380 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-29 02:44:18.823385 | orchestrator | Sunday 29 March 2026 02:44:17 +0000 (0:00:00.537) 0:00:42.070 ********** 2026-03-29 02:44:18.823389 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:44:18.823394 | orchestrator | 2026-03-29 02:44:18.823399 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-29 02:44:18.823404 | orchestrator | Sunday 29 March 2026 02:44:17 +0000 (0:00:00.556) 0:00:42.627 ********** 2026-03-29 02:44:18.823409 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:44:18.823413 | orchestrator | 2026-03-29 02:44:18.823418 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-29 02:44:18.823423 | orchestrator | Sunday 29 March 2026 02:44:17 +0000 (0:00:00.308) 0:00:42.935 ********** 2026-03-29 02:44:18.823428 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823433 | orchestrator | 2026-03-29 02:44:18.823438 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-29 02:44:18.823442 | orchestrator | Sunday 29 March 2026 02:44:17 +0000 (0:00:00.108) 0:00:43.043 ********** 2026-03-29 02:44:18.823447 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823452 | orchestrator | 2026-03-29 02:44:18.823457 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-29 02:44:18.823462 | orchestrator | Sunday 29 March 2026 02:44:18 +0000 (0:00:00.116) 0:00:43.160 ********** 2026-03-29 02:44:18.823467 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 02:44:18.823472 | orchestrator |  "vgs_report": { 2026-03-29 02:44:18.823480 | orchestrator |  "vg": [] 2026-03-29 02:44:18.823487 | orchestrator |  } 2026-03-29 02:44:18.823495 | orchestrator | } 2026-03-29 02:44:18.823502 | orchestrator | 2026-03-29 02:44:18.823508 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-29 02:44:18.823515 | orchestrator | Sunday 29 March 2026 02:44:18 +0000 (0:00:00.149) 0:00:43.310 ********** 2026-03-29 02:44:18.823523 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823531 | orchestrator | 2026-03-29 02:44:18.823538 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-29 02:44:18.823545 | orchestrator | Sunday 29 March 2026 02:44:18 +0000 (0:00:00.131) 0:00:43.442 ********** 2026-03-29 02:44:18.823552 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823558 | orchestrator | 2026-03-29 02:44:18.823565 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-29 02:44:18.823572 | orchestrator | Sunday 29 March 2026 02:44:18 +0000 (0:00:00.144) 0:00:43.586 ********** 2026-03-29 02:44:18.823580 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823587 | orchestrator | 2026-03-29 02:44:18.823594 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-29 02:44:18.823602 | orchestrator | Sunday 29 March 2026 02:44:18 +0000 (0:00:00.136) 0:00:43.723 ********** 2026-03-29 02:44:18.823614 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:18.823622 | orchestrator | 2026-03-29 02:44:18.823634 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-29 02:44:23.325119 | orchestrator | Sunday 29 March 2026 02:44:18 +0000 (0:00:00.136) 0:00:43.860 ********** 2026-03-29 02:44:23.325226 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325240 | orchestrator | 2026-03-29 02:44:23.325249 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-29 02:44:23.325258 | orchestrator | Sunday 29 March 2026 02:44:18 +0000 (0:00:00.136) 0:00:43.996 ********** 2026-03-29 02:44:23.325265 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325273 | orchestrator | 2026-03-29 02:44:23.325280 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-29 02:44:23.325288 | orchestrator | Sunday 29 March 2026 02:44:19 +0000 (0:00:00.133) 0:00:44.130 ********** 2026-03-29 02:44:23.325295 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 02:44:23.325303 | orchestrator | 2026-03-29 02:44:23.325324 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-29 02:44:23.325332 | orchestrator | Sunday 29 March 2026 02:44:19 +0000 (0:00:00.130) 0:00:44.261 ********** 2026-03-29 02:44:23.325339 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325346 | orchestrator | 2026-03-29 02:44:23.325354 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-29 02:44:23.325361 | orchestrator | Sunday 29 March 2026 02:44:19 +0000 (0:00:00.133) 0:00:44.395 ********** 2026-03-29 02:44:23.325368 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325375 | orchestrator | 2026-03-29 02:44:23.325382 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-29 02:44:23.325390 | orchestrator | Sunday 29 March 2026 02:44:19 +0000 (0:00:00.301) 0:00:44.696 ********** 2026-03-29 02:44:23.325397 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325404 | orchestrator | 2026-03-29 02:44:23.325411 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-29 02:44:23.325419 | orchestrator | Sunday 29 March 2026 02:44:19 +0000 (0:00:00.134) 0:00:44.831 ********** 2026-03-29 02:44:23.325426 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325433 | orchestrator | 2026-03-29 02:44:23.325446 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-29 02:44:23.325457 | orchestrator | Sunday 29 March 2026 02:44:19 +0000 (0:00:00.140) 0:00:44.971 ********** 2026-03-29 02:44:23.325469 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325480 | orchestrator | 2026-03-29 02:44:23.325491 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-29 02:44:23.325503 | orchestrator | 
Sunday 29 March 2026 02:44:20 +0000 (0:00:00.131) 0:00:45.103 ********** 2026-03-29 02:44:23.325514 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325527 | orchestrator | 2026-03-29 02:44:23.325539 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-29 02:44:23.325551 | orchestrator | Sunday 29 March 2026 02:44:20 +0000 (0:00:00.131) 0:00:45.234 ********** 2026-03-29 02:44:23.325563 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325575 | orchestrator | 2026-03-29 02:44:23.325586 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-29 02:44:23.325598 | orchestrator | Sunday 29 March 2026 02:44:20 +0000 (0:00:00.124) 0:00:45.358 ********** 2026-03-29 02:44:23.325610 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.325624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.325637 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325649 | orchestrator | 2026-03-29 02:44:23.325661 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-29 02:44:23.325700 | orchestrator | Sunday 29 March 2026 02:44:20 +0000 (0:00:00.148) 0:00:45.507 ********** 2026-03-29 02:44:23.325713 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.325726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.325739 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 02:44:23.325751 | orchestrator | 2026-03-29 02:44:23.325763 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-29 02:44:23.325775 | orchestrator | Sunday 29 March 2026 02:44:20 +0000 (0:00:00.148) 0:00:45.655 ********** 2026-03-29 02:44:23.325853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.325867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.325879 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325894 | orchestrator | 2026-03-29 02:44:23.325901 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-29 02:44:23.325909 | orchestrator | Sunday 29 March 2026 02:44:20 +0000 (0:00:00.134) 0:00:45.790 ********** 2026-03-29 02:44:23.325916 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.325924 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.325931 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.325938 | orchestrator | 2026-03-29 02:44:23.325964 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-29 02:44:23.325972 | orchestrator | Sunday 29 March 2026 02:44:20 +0000 (0:00:00.140) 0:00:45.930 ********** 2026-03-29 02:44:23.325979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 
'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.325986 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.325994 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.326001 | orchestrator | 2026-03-29 02:44:23.326062 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-29 02:44:23.326071 | orchestrator | Sunday 29 March 2026 02:44:21 +0000 (0:00:00.153) 0:00:46.084 ********** 2026-03-29 02:44:23.326079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.326086 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.326094 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.326101 | orchestrator | 2026-03-29 02:44:23.326108 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-29 02:44:23.326116 | orchestrator | Sunday 29 March 2026 02:44:21 +0000 (0:00:00.134) 0:00:46.219 ********** 2026-03-29 02:44:23.326123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.326130 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.326137 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.326145 | orchestrator | 2026-03-29 02:44:23.326161 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-29 
02:44:23.326168 | orchestrator | Sunday 29 March 2026 02:44:21 +0000 (0:00:00.339) 0:00:46.559 ********** 2026-03-29 02:44:23.326176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.326183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.326190 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.326197 | orchestrator | 2026-03-29 02:44:23.326205 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-29 02:44:23.326212 | orchestrator | Sunday 29 March 2026 02:44:21 +0000 (0:00:00.152) 0:00:46.711 ********** 2026-03-29 02:44:23.326219 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:44:23.326227 | orchestrator | 2026-03-29 02:44:23.326234 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-29 02:44:23.326241 | orchestrator | Sunday 29 March 2026 02:44:22 +0000 (0:00:00.497) 0:00:47.208 ********** 2026-03-29 02:44:23.326248 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:44:23.326255 | orchestrator | 2026-03-29 02:44:23.326262 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-29 02:44:23.326270 | orchestrator | Sunday 29 March 2026 02:44:22 +0000 (0:00:00.530) 0:00:47.739 ********** 2026-03-29 02:44:23.326277 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:44:23.326284 | orchestrator | 2026-03-29 02:44:23.326291 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-29 02:44:23.326298 | orchestrator | Sunday 29 March 2026 02:44:22 +0000 (0:00:00.150) 0:00:47.890 ********** 2026-03-29 02:44:23.326306 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'vg_name': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}) 2026-03-29 02:44:23.326314 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'vg_name': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}) 2026-03-29 02:44:23.326321 | orchestrator | 2026-03-29 02:44:23.326329 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-29 02:44:23.326336 | orchestrator | Sunday 29 March 2026 02:44:23 +0000 (0:00:00.179) 0:00:48.069 ********** 2026-03-29 02:44:23.326343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.326350 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:23.326358 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:23.326365 | orchestrator | 2026-03-29 02:44:23.326372 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-29 02:44:23.326380 | orchestrator | Sunday 29 March 2026 02:44:23 +0000 (0:00:00.143) 0:00:48.213 ********** 2026-03-29 02:44:23.326387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:23.326399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:30.309502 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:30.309630 | orchestrator | 2026-03-29 02:44:30.309646 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-29 02:44:30.309656 | 
orchestrator | Sunday 29 March 2026 02:44:23 +0000 (0:00:00.149) 0:00:48.362 ********** 2026-03-29 02:44:30.309665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 02:44:30.309709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 02:44:30.309718 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:44:30.309726 | orchestrator | 2026-03-29 02:44:30.309735 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-29 02:44:30.309743 | orchestrator | Sunday 29 March 2026 02:44:23 +0000 (0:00:00.173) 0:00:48.536 ********** 2026-03-29 02:44:30.309751 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 02:44:30.309759 | orchestrator |  "lvm_report": { 2026-03-29 02:44:30.309769 | orchestrator |  "lv": [ 2026-03-29 02:44:30.309777 | orchestrator |  { 2026-03-29 02:44:30.309813 | orchestrator |  "lv_name": "osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056", 2026-03-29 02:44:30.309823 | orchestrator |  "vg_name": "ceph-df205cf6-8b40-53f0-aec9-c93c6a681056" 2026-03-29 02:44:30.309831 | orchestrator |  }, 2026-03-29 02:44:30.309839 | orchestrator |  { 2026-03-29 02:44:30.309847 | orchestrator |  "lv_name": "osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948", 2026-03-29 02:44:30.309855 | orchestrator |  "vg_name": "ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948" 2026-03-29 02:44:30.309862 | orchestrator |  } 2026-03-29 02:44:30.309871 | orchestrator |  ], 2026-03-29 02:44:30.309879 | orchestrator |  "pv": [ 2026-03-29 02:44:30.309887 | orchestrator |  { 2026-03-29 02:44:30.309894 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-29 02:44:30.309902 | orchestrator |  "vg_name": "ceph-df205cf6-8b40-53f0-aec9-c93c6a681056" 2026-03-29 02:44:30.309911 | orchestrator |  }, 2026-03-29 
02:44:30.309919 | orchestrator |  { 2026-03-29 02:44:30.309927 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-29 02:44:30.309935 | orchestrator |  "vg_name": "ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948" 2026-03-29 02:44:30.309943 | orchestrator |  } 2026-03-29 02:44:30.309951 | orchestrator |  ] 2026-03-29 02:44:30.309959 | orchestrator |  } 2026-03-29 02:44:30.309967 | orchestrator | } 2026-03-29 02:44:30.309975 | orchestrator | 2026-03-29 02:44:30.309983 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-29 02:44:30.309991 | orchestrator | 2026-03-29 02:44:30.309999 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 02:44:30.310007 | orchestrator | Sunday 29 March 2026 02:44:23 +0000 (0:00:00.314) 0:00:48.851 ********** 2026-03-29 02:44:30.310063 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-29 02:44:30.310074 | orchestrator | 2026-03-29 02:44:30.310084 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 02:44:30.310092 | orchestrator | Sunday 29 March 2026 02:44:24 +0000 (0:00:00.857) 0:00:49.708 ********** 2026-03-29 02:44:30.310102 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:30.310111 | orchestrator | 2026-03-29 02:44:30.310120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310129 | orchestrator | Sunday 29 March 2026 02:44:24 +0000 (0:00:00.256) 0:00:49.964 ********** 2026-03-29 02:44:30.310139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-29 02:44:30.310148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-29 02:44:30.310157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-29 02:44:30.310166 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-29 02:44:30.310174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-29 02:44:30.310182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-29 02:44:30.310189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-29 02:44:30.310205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-29 02:44:30.310213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-29 02:44:30.310221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-29 02:44:30.310229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-29 02:44:30.310237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-29 02:44:30.310245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-29 02:44:30.310253 | orchestrator | 2026-03-29 02:44:30.310261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310268 | orchestrator | Sunday 29 March 2026 02:44:25 +0000 (0:00:00.426) 0:00:50.391 ********** 2026-03-29 02:44:30.310276 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:30.310284 | orchestrator | 2026-03-29 02:44:30.310292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310300 | orchestrator | Sunday 29 March 2026 02:44:25 +0000 (0:00:00.230) 0:00:50.621 ********** 2026-03-29 02:44:30.310308 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:30.310316 | orchestrator | 2026-03-29 
02:44:30.310324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310348 | orchestrator | Sunday 29 March 2026 02:44:25 +0000 (0:00:00.217) 0:00:50.838 ********** 2026-03-29 02:44:30.310357 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:30.310365 | orchestrator | 2026-03-29 02:44:30.310373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310380 | orchestrator | Sunday 29 March 2026 02:44:26 +0000 (0:00:00.217) 0:00:51.056 ********** 2026-03-29 02:44:30.310388 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:30.310396 | orchestrator | 2026-03-29 02:44:30.310404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310412 | orchestrator | Sunday 29 March 2026 02:44:26 +0000 (0:00:00.209) 0:00:51.265 ********** 2026-03-29 02:44:30.310420 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:30.310431 | orchestrator | 2026-03-29 02:44:30.310444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310457 | orchestrator | Sunday 29 March 2026 02:44:26 +0000 (0:00:00.210) 0:00:51.476 ********** 2026-03-29 02:44:30.310470 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:30.310482 | orchestrator | 2026-03-29 02:44:30.310495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310507 | orchestrator | Sunday 29 March 2026 02:44:26 +0000 (0:00:00.207) 0:00:51.683 ********** 2026-03-29 02:44:30.310519 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:30.310533 | orchestrator | 2026-03-29 02:44:30.310546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310559 | orchestrator | Sunday 29 March 2026 02:44:26 +0000 (0:00:00.227) 
0:00:51.910 ********** 2026-03-29 02:44:30.310570 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:30.310578 | orchestrator | 2026-03-29 02:44:30.310586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310594 | orchestrator | Sunday 29 March 2026 02:44:27 +0000 (0:00:00.210) 0:00:52.120 ********** 2026-03-29 02:44:30.310602 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6) 2026-03-29 02:44:30.310611 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6) 2026-03-29 02:44:30.310619 | orchestrator | 2026-03-29 02:44:30.310631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310644 | orchestrator | Sunday 29 March 2026 02:44:28 +0000 (0:00:01.021) 0:00:53.142 ********** 2026-03-29 02:44:30.310707 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735) 2026-03-29 02:44:30.310731 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735) 2026-03-29 02:44:30.310745 | orchestrator | 2026-03-29 02:44:30.310757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310770 | orchestrator | Sunday 29 March 2026 02:44:28 +0000 (0:00:00.490) 0:00:53.633 ********** 2026-03-29 02:44:30.310783 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa) 2026-03-29 02:44:30.310822 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa) 2026-03-29 02:44:30.310835 | orchestrator | 2026-03-29 02:44:30.310847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310862 | orchestrator | Sunday 29 
March 2026 02:44:29 +0000 (0:00:00.472) 0:00:54.105 ********** 2026-03-29 02:44:30.310875 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b) 2026-03-29 02:44:30.310887 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b) 2026-03-29 02:44:30.310899 | orchestrator | 2026-03-29 02:44:30.310908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 02:44:30.310916 | orchestrator | Sunday 29 March 2026 02:44:29 +0000 (0:00:00.455) 0:00:54.561 ********** 2026-03-29 02:44:30.310924 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 02:44:30.310944 | orchestrator | 2026-03-29 02:44:30.310952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:30.310970 | orchestrator | Sunday 29 March 2026 02:44:29 +0000 (0:00:00.351) 0:00:54.912 ********** 2026-03-29 02:44:30.310978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-29 02:44:30.310986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-29 02:44:30.310994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-29 02:44:30.311002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-29 02:44:30.311009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-29 02:44:30.311017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-29 02:44:30.311025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-29 02:44:30.311033 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-29 02:44:30.311041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-29 02:44:30.311048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-29 02:44:30.311056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-29 02:44:30.311072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-29 02:44:39.522633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-29 02:44:39.522758 | orchestrator | 2026-03-29 02:44:39.522781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.522827 | orchestrator | Sunday 29 March 2026 02:44:30 +0000 (0:00:00.426) 0:00:55.339 ********** 2026-03-29 02:44:39.522841 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.522856 | orchestrator | 2026-03-29 02:44:39.522869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.522902 | orchestrator | Sunday 29 March 2026 02:44:30 +0000 (0:00:00.238) 0:00:55.578 ********** 2026-03-29 02:44:39.522917 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.522958 | orchestrator | 2026-03-29 02:44:39.522972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.522984 | orchestrator | Sunday 29 March 2026 02:44:30 +0000 (0:00:00.232) 0:00:55.810 ********** 2026-03-29 02:44:39.522999 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523012 | orchestrator | 2026-03-29 02:44:39.523024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523038 | 
orchestrator | Sunday 29 March 2026 02:44:30 +0000 (0:00:00.217) 0:00:56.028 ********** 2026-03-29 02:44:39.523051 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523063 | orchestrator | 2026-03-29 02:44:39.523075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523088 | orchestrator | Sunday 29 March 2026 02:44:31 +0000 (0:00:00.231) 0:00:56.259 ********** 2026-03-29 02:44:39.523101 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523114 | orchestrator | 2026-03-29 02:44:39.523128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523141 | orchestrator | Sunday 29 March 2026 02:44:31 +0000 (0:00:00.768) 0:00:57.027 ********** 2026-03-29 02:44:39.523155 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523167 | orchestrator | 2026-03-29 02:44:39.523181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523194 | orchestrator | Sunday 29 March 2026 02:44:32 +0000 (0:00:00.246) 0:00:57.273 ********** 2026-03-29 02:44:39.523207 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523220 | orchestrator | 2026-03-29 02:44:39.523232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523247 | orchestrator | Sunday 29 March 2026 02:44:32 +0000 (0:00:00.223) 0:00:57.496 ********** 2026-03-29 02:44:39.523261 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523275 | orchestrator | 2026-03-29 02:44:39.523288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523301 | orchestrator | Sunday 29 March 2026 02:44:32 +0000 (0:00:00.240) 0:00:57.737 ********** 2026-03-29 02:44:39.523316 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-29 02:44:39.523331 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-29 02:44:39.523345 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-29 02:44:39.523358 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-29 02:44:39.523369 | orchestrator | 2026-03-29 02:44:39.523379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523388 | orchestrator | Sunday 29 March 2026 02:44:33 +0000 (0:00:00.691) 0:00:58.428 ********** 2026-03-29 02:44:39.523398 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523406 | orchestrator | 2026-03-29 02:44:39.523414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523422 | orchestrator | Sunday 29 March 2026 02:44:33 +0000 (0:00:00.211) 0:00:58.640 ********** 2026-03-29 02:44:39.523435 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523449 | orchestrator | 2026-03-29 02:44:39.523461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523473 | orchestrator | Sunday 29 March 2026 02:44:33 +0000 (0:00:00.221) 0:00:58.862 ********** 2026-03-29 02:44:39.523486 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523497 | orchestrator | 2026-03-29 02:44:39.523508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 02:44:39.523520 | orchestrator | Sunday 29 March 2026 02:44:34 +0000 (0:00:00.225) 0:00:59.088 ********** 2026-03-29 02:44:39.523534 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.523548 | orchestrator | 2026-03-29 02:44:39.523561 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-29 02:44:39.523575 | orchestrator | Sunday 29 March 2026 02:44:34 +0000 (0:00:00.208) 0:00:59.297 ********** 2026-03-29 02:44:39.523588 | orchestrator | skipping: [testbed-node-5] 2026-03-29 
02:44:39.523602 | orchestrator | 2026-03-29 02:44:39.523623 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-29 02:44:39.523631 | orchestrator | Sunday 29 March 2026 02:44:34 +0000 (0:00:00.135) 0:00:59.432 ********** 2026-03-29 02:44:39.523640 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}}) 2026-03-29 02:44:39.523649 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}}) 2026-03-29 02:44:39.523656 | orchestrator | 2026-03-29 02:44:39.523664 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-29 02:44:39.523675 | orchestrator | Sunday 29 March 2026 02:44:34 +0000 (0:00:00.198) 0:00:59.630 ********** 2026-03-29 02:44:39.523690 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}) 2026-03-29 02:44:39.523705 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}) 2026-03-29 02:44:39.523717 | orchestrator | 2026-03-29 02:44:39.523730 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-29 02:44:39.523820 | orchestrator | Sunday 29 March 2026 02:44:36 +0000 (0:00:01.905) 0:01:01.536 ********** 2026-03-29 02:44:39.523836 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:39.523851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:39.523864 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 02:44:39.523877 | orchestrator | 2026-03-29 02:44:39.523901 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-29 02:44:39.523915 | orchestrator | Sunday 29 March 2026 02:44:36 +0000 (0:00:00.331) 0:01:01.868 ********** 2026-03-29 02:44:39.523927 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}) 2026-03-29 02:44:39.523939 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}) 2026-03-29 02:44:39.523951 | orchestrator | 2026-03-29 02:44:39.523964 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-29 02:44:39.523976 | orchestrator | Sunday 29 March 2026 02:44:38 +0000 (0:00:01.370) 0:01:03.239 ********** 2026-03-29 02:44:39.523988 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:39.524001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:39.524013 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.524027 | orchestrator | 2026-03-29 02:44:39.524039 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-29 02:44:39.524053 | orchestrator | Sunday 29 March 2026 02:44:38 +0000 (0:00:00.158) 0:01:03.397 ********** 2026-03-29 02:44:39.524066 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.524079 | orchestrator | 2026-03-29 02:44:39.524092 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-29 02:44:39.524105 | 
orchestrator | Sunday 29 March 2026 02:44:38 +0000 (0:00:00.149) 0:01:03.546 ********** 2026-03-29 02:44:39.524117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:39.524132 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:39.524158 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.524171 | orchestrator | 2026-03-29 02:44:39.524185 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-29 02:44:39.524199 | orchestrator | Sunday 29 March 2026 02:44:38 +0000 (0:00:00.151) 0:01:03.698 ********** 2026-03-29 02:44:39.524213 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.524227 | orchestrator | 2026-03-29 02:44:39.524240 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-29 02:44:39.524253 | orchestrator | Sunday 29 March 2026 02:44:38 +0000 (0:00:00.146) 0:01:03.844 ********** 2026-03-29 02:44:39.524266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:39.524280 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:39.524293 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.524307 | orchestrator | 2026-03-29 02:44:39.524321 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-29 02:44:39.524335 | orchestrator | Sunday 29 March 2026 02:44:38 +0000 (0:00:00.151) 0:01:03.996 ********** 2026-03-29 02:44:39.524350 | orchestrator | 
skipping: [testbed-node-5] 2026-03-29 02:44:39.524364 | orchestrator | 2026-03-29 02:44:39.524378 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-29 02:44:39.524390 | orchestrator | Sunday 29 March 2026 02:44:39 +0000 (0:00:00.136) 0:01:04.132 ********** 2026-03-29 02:44:39.524405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:39.524419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:39.524433 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:39.524447 | orchestrator | 2026-03-29 02:44:39.524461 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-29 02:44:39.524475 | orchestrator | Sunday 29 March 2026 02:44:39 +0000 (0:00:00.134) 0:01:04.267 ********** 2026-03-29 02:44:39.524488 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:39.524501 | orchestrator | 2026-03-29 02:44:39.524515 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-29 02:44:39.524528 | orchestrator | Sunday 29 March 2026 02:44:39 +0000 (0:00:00.143) 0:01:04.410 ********** 2026-03-29 02:44:39.524554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:45.909350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:45.909465 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.909483 | orchestrator | 2026-03-29 02:44:45.909498 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-29 02:44:45.909511 | orchestrator | Sunday 29 March 2026 02:44:39 +0000 (0:00:00.151) 0:01:04.561 ********** 2026-03-29 02:44:45.909539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:45.909551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:45.909580 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.909592 | orchestrator | 2026-03-29 02:44:45.909614 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-29 02:44:45.909626 | orchestrator | Sunday 29 March 2026 02:44:39 +0000 (0:00:00.141) 0:01:04.703 ********** 2026-03-29 02:44:45.909660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:45.909672 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:45.909683 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.909694 | orchestrator | 2026-03-29 02:44:45.909705 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-29 02:44:45.909715 | orchestrator | Sunday 29 March 2026 02:44:39 +0000 (0:00:00.304) 0:01:05.008 ********** 2026-03-29 02:44:45.909726 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.909737 | orchestrator | 2026-03-29 02:44:45.909748 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-29 02:44:45.909759 | orchestrator | Sunday 29 March 2026 02:44:40 +0000 
(0:00:00.133) 0:01:05.141 ********** 2026-03-29 02:44:45.909769 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.909781 | orchestrator | 2026-03-29 02:44:45.909792 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-29 02:44:45.909916 | orchestrator | Sunday 29 March 2026 02:44:40 +0000 (0:00:00.132) 0:01:05.274 ********** 2026-03-29 02:44:45.909930 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.909944 | orchestrator | 2026-03-29 02:44:45.909956 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-29 02:44:45.909969 | orchestrator | Sunday 29 March 2026 02:44:40 +0000 (0:00:00.137) 0:01:05.411 ********** 2026-03-29 02:44:45.909982 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 02:44:45.909995 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-29 02:44:45.910007 | orchestrator | } 2026-03-29 02:44:45.910078 | orchestrator | 2026-03-29 02:44:45.910090 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-29 02:44:45.910101 | orchestrator | Sunday 29 March 2026 02:44:40 +0000 (0:00:00.146) 0:01:05.558 ********** 2026-03-29 02:44:45.910111 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 02:44:45.910122 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-29 02:44:45.910133 | orchestrator | } 2026-03-29 02:44:45.910144 | orchestrator | 2026-03-29 02:44:45.910154 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-29 02:44:45.910165 | orchestrator | Sunday 29 March 2026 02:44:40 +0000 (0:00:00.138) 0:01:05.696 ********** 2026-03-29 02:44:45.910176 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 02:44:45.910187 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-29 02:44:45.910198 | orchestrator | } 2026-03-29 02:44:45.910209 | orchestrator | 2026-03-29 02:44:45.910219 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-29 02:44:45.910230 | orchestrator | Sunday 29 March 2026 02:44:40 +0000 (0:00:00.135) 0:01:05.832 ********** 2026-03-29 02:44:45.910241 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:45.910252 | orchestrator | 2026-03-29 02:44:45.910262 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-29 02:44:45.910273 | orchestrator | Sunday 29 March 2026 02:44:41 +0000 (0:00:00.536) 0:01:06.368 ********** 2026-03-29 02:44:45.910284 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:45.910294 | orchestrator | 2026-03-29 02:44:45.910305 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-29 02:44:45.910316 | orchestrator | Sunday 29 March 2026 02:44:41 +0000 (0:00:00.530) 0:01:06.899 ********** 2026-03-29 02:44:45.910326 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:45.910337 | orchestrator | 2026-03-29 02:44:45.910348 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-29 02:44:45.910358 | orchestrator | Sunday 29 March 2026 02:44:42 +0000 (0:00:00.512) 0:01:07.412 ********** 2026-03-29 02:44:45.910369 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:45.910380 | orchestrator | 2026-03-29 02:44:45.910390 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-29 02:44:45.910411 | orchestrator | Sunday 29 March 2026 02:44:42 +0000 (0:00:00.141) 0:01:07.554 ********** 2026-03-29 02:44:45.910422 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.910432 | orchestrator | 2026-03-29 02:44:45.910443 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-29 02:44:45.910462 | orchestrator | Sunday 29 March 2026 02:44:42 +0000 (0:00:00.102) 0:01:07.656 ********** 2026-03-29 02:44:45.910481 | 
orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.910505 | orchestrator | 2026-03-29 02:44:45.910529 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-29 02:44:45.910547 | orchestrator | Sunday 29 March 2026 02:44:42 +0000 (0:00:00.288) 0:01:07.945 ********** 2026-03-29 02:44:45.910565 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 02:44:45.910582 | orchestrator |  "vgs_report": { 2026-03-29 02:44:45.910602 | orchestrator |  "vg": [] 2026-03-29 02:44:45.910644 | orchestrator |  } 2026-03-29 02:44:45.910665 | orchestrator | } 2026-03-29 02:44:45.910683 | orchestrator | 2026-03-29 02:44:45.910701 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-29 02:44:45.910717 | orchestrator | Sunday 29 March 2026 02:44:43 +0000 (0:00:00.138) 0:01:08.084 ********** 2026-03-29 02:44:45.910736 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.910753 | orchestrator | 2026-03-29 02:44:45.910770 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-29 02:44:45.910788 | orchestrator | Sunday 29 March 2026 02:44:43 +0000 (0:00:00.133) 0:01:08.218 ********** 2026-03-29 02:44:45.910846 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.910864 | orchestrator | 2026-03-29 02:44:45.910883 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-29 02:44:45.910902 | orchestrator | Sunday 29 March 2026 02:44:43 +0000 (0:00:00.139) 0:01:08.357 ********** 2026-03-29 02:44:45.910922 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.910938 | orchestrator | 2026-03-29 02:44:45.910957 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-29 02:44:45.910976 | orchestrator | Sunday 29 March 2026 02:44:43 +0000 (0:00:00.116) 0:01:08.474 ********** 2026-03-29 02:44:45.910994 | 
orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911011 | orchestrator | 2026-03-29 02:44:45.911030 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-29 02:44:45.911049 | orchestrator | Sunday 29 March 2026 02:44:43 +0000 (0:00:00.130) 0:01:08.604 ********** 2026-03-29 02:44:45.911067 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911087 | orchestrator | 2026-03-29 02:44:45.911106 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-29 02:44:45.911125 | orchestrator | Sunday 29 March 2026 02:44:43 +0000 (0:00:00.142) 0:01:08.747 ********** 2026-03-29 02:44:45.911143 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911161 | orchestrator | 2026-03-29 02:44:45.911180 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-29 02:44:45.911197 | orchestrator | Sunday 29 March 2026 02:44:43 +0000 (0:00:00.155) 0:01:08.902 ********** 2026-03-29 02:44:45.911215 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911234 | orchestrator | 2026-03-29 02:44:45.911252 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-29 02:44:45.911271 | orchestrator | Sunday 29 March 2026 02:44:44 +0000 (0:00:00.181) 0:01:09.084 ********** 2026-03-29 02:44:45.911289 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911307 | orchestrator | 2026-03-29 02:44:45.911324 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-29 02:44:45.911343 | orchestrator | Sunday 29 March 2026 02:44:44 +0000 (0:00:00.147) 0:01:09.231 ********** 2026-03-29 02:44:45.911360 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911378 | orchestrator | 2026-03-29 02:44:45.911396 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-29 02:44:45.911415 | orchestrator | Sunday 29 March 2026 02:44:44 +0000 (0:00:00.171) 0:01:09.403 ********** 2026-03-29 02:44:45.911447 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911466 | orchestrator | 2026-03-29 02:44:45.911484 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-29 02:44:45.911503 | orchestrator | Sunday 29 March 2026 02:44:44 +0000 (0:00:00.154) 0:01:09.557 ********** 2026-03-29 02:44:45.911521 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911539 | orchestrator | 2026-03-29 02:44:45.911557 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-29 02:44:45.911576 | orchestrator | Sunday 29 March 2026 02:44:44 +0000 (0:00:00.441) 0:01:09.999 ********** 2026-03-29 02:44:45.911595 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911612 | orchestrator | 2026-03-29 02:44:45.911630 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-29 02:44:45.911642 | orchestrator | Sunday 29 March 2026 02:44:45 +0000 (0:00:00.153) 0:01:10.153 ********** 2026-03-29 02:44:45.911653 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911663 | orchestrator | 2026-03-29 02:44:45.911674 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-29 02:44:45.911685 | orchestrator | Sunday 29 March 2026 02:44:45 +0000 (0:00:00.143) 0:01:10.297 ********** 2026-03-29 02:44:45.911696 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911706 | orchestrator | 2026-03-29 02:44:45.911717 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-29 02:44:45.911728 | orchestrator | Sunday 29 March 2026 02:44:45 +0000 (0:00:00.164) 0:01:10.461 ********** 2026-03-29 02:44:45.911739 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:45.911752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:45.911763 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911773 | orchestrator | 2026-03-29 02:44:45.911784 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-29 02:44:45.911827 | orchestrator | Sunday 29 March 2026 02:44:45 +0000 (0:00:00.163) 0:01:10.625 ********** 2026-03-29 02:44:45.911843 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:45.911854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:45.911865 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:45.911876 | orchestrator | 2026-03-29 02:44:45.911887 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-29 02:44:45.911898 | orchestrator | Sunday 29 March 2026 02:44:45 +0000 (0:00:00.159) 0:01:10.784 ********** 2026-03-29 02:44:45.911920 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.050326 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.050436 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.050458 | orchestrator | 2026-03-29 02:44:49.050490 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-29 02:44:49.050504 | orchestrator | Sunday 29 March 2026 02:44:45 +0000 (0:00:00.162) 0:01:10.947 ********** 2026-03-29 02:44:49.050515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.050527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.050560 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.050571 | orchestrator | 2026-03-29 02:44:49.050583 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-29 02:44:49.050595 | orchestrator | Sunday 29 March 2026 02:44:46 +0000 (0:00:00.165) 0:01:11.113 ********** 2026-03-29 02:44:49.050607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.050618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.050629 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.050640 | orchestrator | 2026-03-29 02:44:49.050653 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-29 02:44:49.050665 | orchestrator | Sunday 29 March 2026 02:44:46 +0000 (0:00:00.163) 0:01:11.277 ********** 2026-03-29 02:44:49.050677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.050688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.050700 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.050710 | orchestrator | 2026-03-29 02:44:49.050717 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-29 02:44:49.050724 | orchestrator | Sunday 29 March 2026 02:44:46 +0000 (0:00:00.172) 0:01:11.449 ********** 2026-03-29 02:44:49.050731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.050737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.050744 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.050750 | orchestrator | 2026-03-29 02:44:49.050757 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-29 02:44:49.050764 | orchestrator | Sunday 29 March 2026 02:44:46 +0000 (0:00:00.165) 0:01:11.615 ********** 2026-03-29 02:44:49.050770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.050777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.050784 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.050790 | orchestrator | 2026-03-29 02:44:49.050797 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-29 02:44:49.050863 | orchestrator | Sunday 29 March 2026 02:44:46 +0000 (0:00:00.166) 0:01:11.782 ********** 2026-03-29 02:44:49.050876 | 
orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:49.050888 | orchestrator | 2026-03-29 02:44:49.050900 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-29 02:44:49.050910 | orchestrator | Sunday 29 March 2026 02:44:47 +0000 (0:00:00.825) 0:01:12.608 ********** 2026-03-29 02:44:49.050920 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:49.050931 | orchestrator | 2026-03-29 02:44:49.050942 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-29 02:44:49.050954 | orchestrator | Sunday 29 March 2026 02:44:48 +0000 (0:00:00.559) 0:01:13.168 ********** 2026-03-29 02:44:49.050964 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:44:49.050976 | orchestrator | 2026-03-29 02:44:49.050987 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-29 02:44:49.050998 | orchestrator | Sunday 29 March 2026 02:44:48 +0000 (0:00:00.153) 0:01:13.321 ********** 2026-03-29 02:44:49.051020 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'vg_name': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}) 2026-03-29 02:44:49.051032 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'vg_name': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}) 2026-03-29 02:44:49.051043 | orchestrator | 2026-03-29 02:44:49.051055 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-29 02:44:49.051067 | orchestrator | Sunday 29 March 2026 02:44:48 +0000 (0:00:00.167) 0:01:13.489 ********** 2026-03-29 02:44:49.051100 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.051118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.051125 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.051132 | orchestrator | 2026-03-29 02:44:49.051139 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-29 02:44:49.051145 | orchestrator | Sunday 29 March 2026 02:44:48 +0000 (0:00:00.157) 0:01:13.646 ********** 2026-03-29 02:44:49.051152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.051159 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.051165 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.051172 | orchestrator | 2026-03-29 02:44:49.051179 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-29 02:44:49.051185 | orchestrator | Sunday 29 March 2026 02:44:48 +0000 (0:00:00.138) 0:01:13.785 ********** 2026-03-29 02:44:49.051192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 02:44:49.051198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 02:44:49.051205 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:44:49.051211 | orchestrator | 2026-03-29 02:44:49.051218 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-29 02:44:49.051224 | orchestrator | Sunday 29 March 2026 02:44:48 +0000 (0:00:00.149) 0:01:13.934 ********** 2026-03-29 02:44:49.051231 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-29 02:44:49.051238 | orchestrator |  "lvm_report": { 2026-03-29 02:44:49.051245 | orchestrator |  "lv": [ 2026-03-29 02:44:49.051252 | orchestrator |  { 2026-03-29 02:44:49.051259 | orchestrator |  "lv_name": "osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844", 2026-03-29 02:44:49.051267 | orchestrator |  "vg_name": "ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844" 2026-03-29 02:44:49.051273 | orchestrator |  }, 2026-03-29 02:44:49.051280 | orchestrator |  { 2026-03-29 02:44:49.051286 | orchestrator |  "lv_name": "osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33", 2026-03-29 02:44:49.051293 | orchestrator |  "vg_name": "ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33" 2026-03-29 02:44:49.051300 | orchestrator |  } 2026-03-29 02:44:49.051306 | orchestrator |  ], 2026-03-29 02:44:49.051313 | orchestrator |  "pv": [ 2026-03-29 02:44:49.051319 | orchestrator |  { 2026-03-29 02:44:49.051326 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-29 02:44:49.051333 | orchestrator |  "vg_name": "ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844" 2026-03-29 02:44:49.051339 | orchestrator |  }, 2026-03-29 02:44:49.051346 | orchestrator |  { 2026-03-29 02:44:49.051352 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-29 02:44:49.051369 | orchestrator |  "vg_name": "ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33" 2026-03-29 02:44:49.051376 | orchestrator |  } 2026-03-29 02:44:49.051382 | orchestrator |  ] 2026-03-29 02:44:49.051388 | orchestrator |  } 2026-03-29 02:44:49.051395 | orchestrator | } 2026-03-29 02:44:49.051402 | orchestrator | 2026-03-29 02:44:49.051409 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:44:49.051416 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 02:44:49.051423 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 02:44:49.051429 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 02:44:49.051436 | orchestrator | 2026-03-29 02:44:49.051442 | orchestrator | 2026-03-29 02:44:49.051449 | orchestrator | 2026-03-29 02:44:49.051455 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:44:49.051462 | orchestrator | Sunday 29 March 2026 02:44:49 +0000 (0:00:00.133) 0:01:14.067 ********** 2026-03-29 02:44:49.051469 | orchestrator | =============================================================================== 2026-03-29 02:44:49.051475 | orchestrator | Create block VGs -------------------------------------------------------- 5.78s 2026-03-29 02:44:49.051482 | orchestrator | Create block LVs -------------------------------------------------------- 4.30s 2026-03-29 02:44:49.051488 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.86s 2026-03-29 02:44:49.051495 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s 2026-03-29 02:44:49.051501 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.64s 2026-03-29 02:44:49.051507 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s 2026-03-29 02:44:49.051514 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.60s 2026-03-29 02:44:49.051521 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.39s 2026-03-29 02:44:49.051532 | orchestrator | Add known links to the list of available block devices ------------------ 1.35s 2026-03-29 02:44:49.288282 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s 2026-03-29 02:44:49.288385 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s 2026-03-29 02:44:49.288403 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2026-03-29 02:44:49.288444 | orchestrator | Print LVM report data --------------------------------------------------- 0.87s 2026-03-29 02:44:49.288464 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.82s 2026-03-29 02:44:49.288483 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.79s 2026-03-29 02:44:49.288515 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-03-29 02:44:49.288528 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.75s 2026-03-29 02:44:49.288539 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-03-29 02:44:49.288550 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-03-29 02:44:49.288561 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s 2026-03-29 02:45:01.495274 | orchestrator | 2026-03-29 02:45:01 | INFO  | Task 71d508f9-0a0d-41f9-b442-c40437ca18bc (facts) was prepared for execution. 2026-03-29 02:45:01.495369 | orchestrator | 2026-03-29 02:45:01 | INFO  | It takes a moment until task 71d508f9-0a0d-41f9-b442-c40437ca18bc (facts) has been started and output is visible here. 
2026-03-29 02:45:15.204312 | orchestrator | 2026-03-29 02:45:15.204455 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-29 02:45:15.204522 | orchestrator | 2026-03-29 02:45:15.204544 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-29 02:45:15.204599 | orchestrator | Sunday 29 March 2026 02:45:05 +0000 (0:00:00.251) 0:00:00.251 ********** 2026-03-29 02:45:15.204619 | orchestrator | ok: [testbed-manager] 2026-03-29 02:45:15.204639 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:15.204658 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:15.204676 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:15.204694 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:15.204713 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:15.204731 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:15.204749 | orchestrator | 2026-03-29 02:45:15.204767 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-29 02:45:15.204786 | orchestrator | Sunday 29 March 2026 02:45:06 +0000 (0:00:01.254) 0:00:01.506 ********** 2026-03-29 02:45:15.204805 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:45:15.204853 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:15.204874 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:15.204891 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:15.204910 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:15.204929 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:15.204949 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:15.204967 | orchestrator | 2026-03-29 02:45:15.204986 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 02:45:15.205005 | orchestrator | 2026-03-29 02:45:15.205024 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-29 02:45:15.205042 | orchestrator | Sunday 29 March 2026 02:45:07 +0000 (0:00:01.276) 0:00:02.783 ********** 2026-03-29 02:45:15.205060 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:15.205079 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:15.205098 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:15.205117 | orchestrator | ok: [testbed-manager] 2026-03-29 02:45:15.205135 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:15.205153 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:15.205171 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:15.205189 | orchestrator | 2026-03-29 02:45:15.205208 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 02:45:15.205226 | orchestrator | 2026-03-29 02:45:15.205244 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 02:45:15.205263 | orchestrator | Sunday 29 March 2026 02:45:14 +0000 (0:00:06.144) 0:00:08.928 ********** 2026-03-29 02:45:15.205281 | orchestrator | skipping: [testbed-manager] 2026-03-29 02:45:15.205298 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:15.205316 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:15.205333 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:15.205349 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:15.205365 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:15.205383 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:15.205400 | orchestrator | 2026-03-29 02:45:15.205417 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 02:45:15.205435 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 02:45:15.205454 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-29 02:45:15.205473 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 02:45:15.205491 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 02:45:15.205509 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 02:45:15.205545 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 02:45:15.205562 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 02:45:15.205578 | orchestrator | 2026-03-29 02:45:15.205595 | orchestrator | 2026-03-29 02:45:15.205613 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 02:45:15.205649 | orchestrator | Sunday 29 March 2026 02:45:14 +0000 (0:00:00.623) 0:00:09.551 ********** 2026-03-29 02:45:15.205668 | orchestrator | =============================================================================== 2026-03-29 02:45:15.205706 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.14s 2026-03-29 02:45:15.205738 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-03-29 02:45:15.205755 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-03-29 02:45:15.205773 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-03-29 02:45:17.406296 | orchestrator | 2026-03-29 02:45:17 | INFO  | Task 3329d472-27a5-4ddb-9553-0f6fd2e31caa (ceph) was prepared for execution. 2026-03-29 02:45:17.406374 | orchestrator | 2026-03-29 02:45:17 | INFO  | It takes a moment until task 3329d472-27a5-4ddb-9553-0f6fd2e31caa (ceph) has been started and output is visible here. 
2026-03-29 02:45:34.316034 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-29 02:45:34.316130 | orchestrator | 2.16.14 2026-03-29 02:45:34.316143 | orchestrator | 2026-03-29 02:45:34.316151 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-29 02:45:34.316159 | orchestrator | 2026-03-29 02:45:34.316166 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-29 02:45:34.316173 | orchestrator | Sunday 29 March 2026 02:45:22 +0000 (0:00:00.750) 0:00:00.750 ********** 2026-03-29 02:45:34.316181 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:45:34.316189 | orchestrator | 2026-03-29 02:45:34.316196 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-29 02:45:34.316203 | orchestrator | Sunday 29 March 2026 02:45:23 +0000 (0:00:01.085) 0:00:01.835 ********** 2026-03-29 02:45:34.316210 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:34.316217 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:34.316223 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:34.316230 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:34.316237 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:34.316243 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:34.316251 | orchestrator | 2026-03-29 02:45:34.316258 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-29 02:45:34.316265 | orchestrator | Sunday 29 March 2026 02:45:24 +0000 (0:00:01.178) 0:00:03.014 ********** 2026-03-29 02:45:34.316272 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:34.316278 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:34.316285 | orchestrator | ok: [testbed-node-5] 2026-03-29 
02:45:34.316292 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:34.316298 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:34.316305 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:34.316311 | orchestrator | 2026-03-29 02:45:34.316318 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-29 02:45:34.316325 | orchestrator | Sunday 29 March 2026 02:45:25 +0000 (0:00:00.708) 0:00:03.722 ********** 2026-03-29 02:45:34.316332 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:34.316338 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:34.316345 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:34.316351 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:34.316378 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:34.316386 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:34.316392 | orchestrator | 2026-03-29 02:45:34.316399 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-29 02:45:34.316406 | orchestrator | Sunday 29 March 2026 02:45:26 +0000 (0:00:00.894) 0:00:04.617 ********** 2026-03-29 02:45:34.316413 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:34.316419 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:34.316426 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:34.316432 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:34.316439 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:34.316445 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:34.316452 | orchestrator | 2026-03-29 02:45:34.316459 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-29 02:45:34.316465 | orchestrator | Sunday 29 March 2026 02:45:26 +0000 (0:00:00.774) 0:00:05.392 ********** 2026-03-29 02:45:34.316472 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:34.316478 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:34.316485 | orchestrator | ok: 
[testbed-node-5] 2026-03-29 02:45:34.316491 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:34.316498 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:34.316504 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:34.316511 | orchestrator | 2026-03-29 02:45:34.316517 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-29 02:45:34.316524 | orchestrator | Sunday 29 March 2026 02:45:27 +0000 (0:00:00.536) 0:00:05.928 ********** 2026-03-29 02:45:34.316530 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:34.316537 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:34.316543 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:34.316550 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:34.316556 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:34.316563 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:34.316570 | orchestrator | 2026-03-29 02:45:34.316576 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-29 02:45:34.316583 | orchestrator | Sunday 29 March 2026 02:45:28 +0000 (0:00:00.718) 0:00:06.646 ********** 2026-03-29 02:45:34.316591 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:34.316600 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:34.316608 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:34.316616 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:34.316624 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:34.316631 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:34.316639 | orchestrator | 2026-03-29 02:45:34.316647 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-29 02:45:34.316655 | orchestrator | Sunday 29 March 2026 02:45:28 +0000 (0:00:00.564) 0:00:07.211 ********** 2026-03-29 02:45:34.316663 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:34.316670 | orchestrator | 
ok: [testbed-node-4] 2026-03-29 02:45:34.316678 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:34.316686 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:34.316697 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:34.316724 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:34.316737 | orchestrator | 2026-03-29 02:45:34.316748 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-29 02:45:34.316758 | orchestrator | Sunday 29 March 2026 02:45:29 +0000 (0:00:00.661) 0:00:07.872 ********** 2026-03-29 02:45:34.316770 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 02:45:34.316780 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 02:45:34.316791 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 02:45:34.316803 | orchestrator | 2026-03-29 02:45:34.316814 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-29 02:45:34.316827 | orchestrator | Sunday 29 March 2026 02:45:30 +0000 (0:00:00.609) 0:00:08.482 ********** 2026-03-29 02:45:34.316876 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:34.316889 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:34.316901 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:34.316929 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:34.316943 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:34.316955 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:34.316970 | orchestrator | 2026-03-29 02:45:34.316982 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-29 02:45:34.316991 | orchestrator | Sunday 29 March 2026 02:45:30 +0000 (0:00:00.665) 0:00:09.147 ********** 2026-03-29 02:45:34.316998 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-03-29 02:45:34.317005 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 02:45:34.317012 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 02:45:34.317018 | orchestrator | 2026-03-29 02:45:34.317025 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-29 02:45:34.317032 | orchestrator | Sunday 29 March 2026 02:45:32 +0000 (0:00:02.258) 0:00:11.405 ********** 2026-03-29 02:45:34.317038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 02:45:34.317046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-29 02:45:34.317053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 02:45:34.317059 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:34.317066 | orchestrator | 2026-03-29 02:45:34.317073 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-29 02:45:34.317080 | orchestrator | Sunday 29 March 2026 02:45:33 +0000 (0:00:00.372) 0:00:11.777 ********** 2026-03-29 02:45:34.317088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 02:45:34.317097 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 02:45:34.317104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 02:45:34.317111 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:34.317118 | orchestrator | 2026-03-29 02:45:34.317125 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-29 02:45:34.317131 | orchestrator | Sunday 29 March 2026 02:45:33 +0000 (0:00:00.602) 0:00:12.380 ********** 2026-03-29 02:45:34.317140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:34.317149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:34.317156 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:34.317169 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:34.317176 | orchestrator | 2026-03-29 02:45:34.317189 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-03-29 02:45:34.317195 | orchestrator | Sunday 29 March 2026 02:45:34 +0000 (0:00:00.159) 0:00:12.539 ********** 2026-03-29 02:45:34.317211 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 02:45:31.569412', 'end': '2026-03-29 02:45:31.616791', 'delta': '0:00:00.047379', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 02:45:43.064179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 02:45:32.141440', 'end': '2026-03-29 02:45:32.189965', 'delta': '0:00:00.048525', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 02:45:43.064270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 02:45:32.675557', 'end': '2026-03-29 02:45:32.704746', 'delta': 
'0:00:00.029189', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 02:45:43.064280 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064289 | orchestrator | 2026-03-29 02:45:43.064298 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-29 02:45:43.064305 | orchestrator | Sunday 29 March 2026 02:45:34 +0000 (0:00:00.169) 0:00:12.709 ********** 2026-03-29 02:45:43.064312 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:45:43.064318 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:45:43.064325 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:45:43.064331 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:45:43.064338 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:45:43.064344 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:45:43.064350 | orchestrator | 2026-03-29 02:45:43.064357 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-29 02:45:43.064363 | orchestrator | Sunday 29 March 2026 02:45:34 +0000 (0:00:00.688) 0:00:13.397 ********** 2026-03-29 02:45:43.064370 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 02:45:43.064376 | orchestrator | 2026-03-29 02:45:43.064383 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-29 02:45:43.064389 | orchestrator | Sunday 29 March 2026 02:45:35 +0000 (0:00:00.969) 0:00:14.367 ********** 2026-03-29 02:45:43.064416 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064423 | 
orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.064429 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.064436 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.064442 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.064448 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.064454 | orchestrator | 2026-03-29 02:45:43.064461 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-29 02:45:43.064467 | orchestrator | Sunday 29 March 2026 02:45:36 +0000 (0:00:00.587) 0:00:14.955 ********** 2026-03-29 02:45:43.064474 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064480 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.064490 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.064500 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.064510 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.064519 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.064529 | orchestrator | 2026-03-29 02:45:43.064539 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 02:45:43.064549 | orchestrator | Sunday 29 March 2026 02:45:37 +0000 (0:00:01.012) 0:00:15.967 ********** 2026-03-29 02:45:43.064558 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064569 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.064579 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.064590 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.064598 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.064615 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.064623 | orchestrator | 2026-03-29 02:45:43.064634 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-29 02:45:43.064644 | orchestrator | Sunday 29 March 2026 02:45:38 +0000 
(0:00:00.565) 0:00:16.533 ********** 2026-03-29 02:45:43.064654 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064663 | orchestrator | 2026-03-29 02:45:43.064673 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-29 02:45:43.064684 | orchestrator | Sunday 29 March 2026 02:45:38 +0000 (0:00:00.121) 0:00:16.655 ********** 2026-03-29 02:45:43.064694 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064705 | orchestrator | 2026-03-29 02:45:43.064716 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 02:45:43.064727 | orchestrator | Sunday 29 March 2026 02:45:38 +0000 (0:00:00.207) 0:00:16.863 ********** 2026-03-29 02:45:43.064739 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064749 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.064761 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.064772 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.064782 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.064793 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.064805 | orchestrator | 2026-03-29 02:45:43.064832 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-29 02:45:43.064871 | orchestrator | Sunday 29 March 2026 02:45:39 +0000 (0:00:00.614) 0:00:17.478 ********** 2026-03-29 02:45:43.064883 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064894 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.064904 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.064915 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.064925 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.064936 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.064947 | orchestrator | 2026-03-29 02:45:43.064958 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-03-29 02:45:43.064969 | orchestrator | Sunday 29 March 2026 02:45:39 +0000 (0:00:00.569) 0:00:18.047 ********** 2026-03-29 02:45:43.064980 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.064990 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.065001 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.065022 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.065034 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.065044 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.065055 | orchestrator | 2026-03-29 02:45:43.065066 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-29 02:45:43.065076 | orchestrator | Sunday 29 March 2026 02:45:40 +0000 (0:00:00.747) 0:00:18.795 ********** 2026-03-29 02:45:43.065088 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.065098 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.065109 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.065116 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.065122 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.065128 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.065134 | orchestrator | 2026-03-29 02:45:43.065141 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-29 02:45:43.065147 | orchestrator | Sunday 29 March 2026 02:45:40 +0000 (0:00:00.537) 0:00:19.333 ********** 2026-03-29 02:45:43.065153 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.065159 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.065165 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.065171 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.065177 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.065183 | orchestrator 
| skipping: [testbed-node-2] 2026-03-29 02:45:43.065189 | orchestrator | 2026-03-29 02:45:43.065196 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-29 02:45:43.065202 | orchestrator | Sunday 29 March 2026 02:45:41 +0000 (0:00:00.689) 0:00:20.022 ********** 2026-03-29 02:45:43.065208 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.065218 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.065231 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.065246 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.065256 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.065266 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.065275 | orchestrator | 2026-03-29 02:45:43.065286 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-29 02:45:43.065296 | orchestrator | Sunday 29 March 2026 02:45:42 +0000 (0:00:00.527) 0:00:20.550 ********** 2026-03-29 02:45:43.065305 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.065316 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.065326 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.065336 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.065346 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.065357 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:43.065368 | orchestrator | 2026-03-29 02:45:43.065378 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-29 02:45:43.065388 | orchestrator | Sunday 29 March 2026 02:45:42 +0000 (0:00:00.689) 0:00:21.239 ********** 2026-03-29 02:45:43.065400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c', 'dm-uuid-LVM-WmwWNP6o5LQNgrcvTESUpu2sCljSf9EJkfdNL8HsipxQGyavpLq36XQFDCYO8YrP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.065420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f', 'dm-uuid-LVM-0kHDhDCPHLGd2Fg1VzOlgDOeDKeaHucwfak19l6KqwOwdAXhRxsleFnI4v0OuiOl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.065448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056', 'dm-uuid-LVM-IXftd1VPXOpbncKd3f2ob1nYXsz4DemJ2XJQIMxaL0NRJ8j3ZeXDz0EJW4fLUFzW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948', 'dm-uuid-LVM-VVVRanGAMYCBBo3Ea1Is2tjcYgwKNf2qA0QNo4TmjeChe8gjBEKp176k85VNMXVp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': 
'106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.074793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.074808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w79kNO-xrib-djNF-BC1b-oenW-947w-67KtbL', 'scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472', 'scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.074819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.154007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W8BXAo-VIeS-lNkU-0xsH-1v6j-IWb5-xeSbRL', 'scsi-0QEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249', 'scsi-SQEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.154165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.154182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e', 'scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.154193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.154203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.154247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-29 02:45:43.154258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.154267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.154320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.154335 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OXgjHK-x1j6-yafV-EcrV-Z8hS-LdwZ-h63E7O', 'scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0', 'scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.154361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TjrJ6N-vXHW-nYMX-XIsI-w8Ql-NkWF-pB5l7A', 'scsi-0QEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62', 'scsi-SQEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.154378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a', 'scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.283988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.284079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844', 'dm-uuid-LVM-VJ9z4eyflUTf2lcw8J1Bh3VXDEKKGuPmdvxBFAfXTwWZGF4ojvc0MEIvaFMTSMoe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33', 
'dm-uuid-LVM-ZRBeHs6onLIpNjnfPONnwMoGWYFOYt3b0sOhEPSSzOPtCa3muL1oqHvJG7beZNDD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284167 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:43.284176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.284240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.284270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FIE3VR-hmEq-gbau-KgWX-Ie3n-RrWX-Y63w2o', 'scsi-0QEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735', 'scsi-SQEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.284293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VkIrl1-06lK-dW9p-hM1X-TIpn-uX5t-oclg00', 'scsi-0QEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa', 'scsi-SQEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.492247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.492345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b', 'scsi-SQEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.492361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.492394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.492418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-29 02:45:43.492430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.492440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.492450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.492477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.492488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.492513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.492533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.492545 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:43.492556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.492566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-29 02:45:43.492583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608331 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:43.608367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.608391 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:43.608404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:43.608417 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:43.608429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-03-29 02:45:43.608527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:43.608562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:45:44.294245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:44.294375 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:45:44.294400 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:44.294420 | orchestrator | 2026-03-29 02:45:44.294440 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-29 02:45:44.294459 | orchestrator | Sunday 29 March 2026 02:45:43 +0000 (0:00:00.880) 0:00:22.120 ********** 2026-03-29 02:45:44.294478 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c', 'dm-uuid-LVM-WmwWNP6o5LQNgrcvTESUpu2sCljSf9EJkfdNL8HsipxQGyavpLq36XQFDCYO8YrP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.294540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f', 'dm-uuid-LVM-0kHDhDCPHLGd2Fg1VzOlgDOeDKeaHucwfak19l6KqwOwdAXhRxsleFnI4v0OuiOl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.294552 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.294565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.294582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.294593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.294603 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.294613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.294637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334266 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056', 'dm-uuid-LVM-IXftd1VPXOpbncKd3f2ob1nYXsz4DemJ2XJQIMxaL0NRJ8j3ZeXDz0EJW4fLUFzW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334321 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w79kNO-xrib-djNF-BC1b-oenW-947w-67KtbL', 'scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472', 'scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334333 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948', 'dm-uuid-LVM-VVVRanGAMYCBBo3Ea1Is2tjcYgwKNf2qA0QNo4TmjeChe8gjBEKp176k85VNMXVp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334340 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W8BXAo-VIeS-lNkU-0xsH-1v6j-IWb5-xeSbRL', 'scsi-0QEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249', 'scsi-SQEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334359 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.334371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage 
controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e', 'scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.456684 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.456837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.456933 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.456953 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.457011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.457034 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:44.457057 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.457103 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.457141 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-29 02:45:44.457173 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OXgjHK-x1j6-yafV-EcrV-Z8hS-LdwZ-h63E7O', 'scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0', 'scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.457195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TjrJ6N-vXHW-nYMX-XIsI-w8Ql-NkWF-pB5l7A', 'scsi-0QEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62', 'scsi-SQEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652667 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a', 'scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844', 'dm-uuid-LVM-VJ9z4eyflUTf2lcw8J1Bh3VXDEKKGuPmdvxBFAfXTwWZGF4ojvc0MEIvaFMTSMoe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652790 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33', 'dm-uuid-LVM-ZRBeHs6onLIpNjnfPONnwMoGWYFOYt3b0sOhEPSSzOPtCa3muL1oqHvJG7beZNDD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652837 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652903 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652910 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:44.652920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652927 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652934 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.652948 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717559 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717655 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 
'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717667 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717689 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717698 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FIE3VR-hmEq-gbau-KgWX-Ie3n-RrWX-Y63w2o', 'scsi-0QEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735', 'scsi-SQEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717712 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717719 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VkIrl1-06lK-dW9p-hM1X-TIpn-uX5t-oclg00', 'scsi-0QEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa', 'scsi-SQEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717762 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717770 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-29 02:45:44.717788 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b', 'scsi-SQEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858419 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858492 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-43-00']}, 
'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858516 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858539 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858571 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:44.858578 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858584 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858590 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858595 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858600 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858609 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:44.858623 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111452 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111533 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-29 02:45:45.111579 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111593 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:45.111604 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:45.111630 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111640 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111650 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111656 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111661 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111680 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111685 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:45.111697 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:45:52.739746 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16', 
'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 02:45:52.739964 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 02:45:52.739986 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:45:52.739994 | orchestrator |
2026-03-29 02:45:52.740003 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-29 02:45:52.740010 | orchestrator | Sunday 29 March 2026 02:45:45 +0000 (0:00:01.389) 0:00:23.509 **********
2026-03-29 02:45:52.740017 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:45:52.740024 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:45:52.740030 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:45:52.740036 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:45:52.740042 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:45:52.740048 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:45:52.740054 | orchestrator |
2026-03-29 02:45:52.740060 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-29 02:45:52.740066 | orchestrator | Sunday 29 March 2026 02:45:46 +0000 (0:00:00.963) 0:00:24.473 **********
2026-03-29 02:45:52.740072 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:45:52.740078 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:45:52.740083 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:45:52.740089 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:45:52.740096 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:45:52.740101 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:45:52.740107 | orchestrator |
2026-03-29 02:45:52.740113 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-29 02:45:52.740119 | orchestrator | Sunday 29 March 2026 02:45:46 +0000 (0:00:00.868) 0:00:25.341 **********
2026-03-29 02:45:52.740125 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:45:52.740130 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:45:52.740136 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:45:52.740158 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:45:52.740166 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:45:52.740172 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:45:52.740179 | orchestrator |
2026-03-29 02:45:52.740185 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-29 02:45:52.740192 | orchestrator | Sunday 29 March 2026 02:45:47 +0000 (0:00:00.595) 0:00:25.937 **********
2026-03-29 02:45:52.740198 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:45:52.740204 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:45:52.740210 | orchestrator | skipping: [testbed-node-5]
2026-03-29
02:45:52.740216 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:52.740222 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:52.740225 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:52.740229 | orchestrator | 2026-03-29 02:45:52.740233 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 02:45:52.740237 | orchestrator | Sunday 29 March 2026 02:45:48 +0000 (0:00:00.832) 0:00:26.770 ********** 2026-03-29 02:45:52.740241 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:52.740244 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:52.740248 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:52.740260 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:52.740263 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:52.740267 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:52.740271 | orchestrator | 2026-03-29 02:45:52.740275 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 02:45:52.740278 | orchestrator | Sunday 29 March 2026 02:45:49 +0000 (0:00:00.666) 0:00:27.436 ********** 2026-03-29 02:45:52.740282 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:52.740295 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:52.740299 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:52.740308 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:52.740313 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:52.740318 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:45:52.740322 | orchestrator | 2026-03-29 02:45:52.740326 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-29 02:45:52.740331 | orchestrator | Sunday 29 March 2026 02:45:49 +0000 (0:00:00.943) 0:00:28.379 ********** 2026-03-29 02:45:52.740335 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-0) 2026-03-29 02:45:52.740341 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-29 02:45:52.740345 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-29 02:45:52.740349 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-29 02:45:52.740354 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-29 02:45:52.740358 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-29 02:45:52.740362 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-29 02:45:52.740367 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 02:45:52.740371 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-29 02:45:52.740375 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-29 02:45:52.740380 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-29 02:45:52.740384 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 02:45:52.740388 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-29 02:45:52.740393 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-29 02:45:52.740397 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 02:45:52.740401 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-29 02:45:52.740405 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-29 02:45:52.740415 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-29 02:45:52.740420 | orchestrator | 2026-03-29 02:45:52.740424 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 02:45:52.740429 | orchestrator | Sunday 29 March 2026 02:45:51 +0000 (0:00:01.790) 0:00:30.169 ********** 2026-03-29 02:45:52.740434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 02:45:52.740439 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-03-29 02:45:52.740443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 02:45:52.740448 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:45:52.740452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-29 02:45:52.740456 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-29 02:45:52.740461 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-29 02:45:52.740465 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:45:52.740470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-29 02:45:52.740474 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-29 02:45:52.740479 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-29 02:45:52.740483 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:45:52.740487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 02:45:52.740492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 02:45:52.740499 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 02:45:52.740504 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:45:52.740508 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-29 02:45:52.740513 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-29 02:45:52.740517 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-29 02:45:52.740522 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:45:52.740528 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-29 02:45:52.740534 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-29 02:45:52.740538 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-29 02:45:52.740543 | orchestrator | 
skipping: [testbed-node-2] 2026-03-29 02:45:52.740547 | orchestrator | 2026-03-29 02:45:52.740551 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-29 02:45:52.740560 | orchestrator | Sunday 29 March 2026 02:45:52 +0000 (0:00:00.974) 0:00:31.144 ********** 2026-03-29 02:46:11.178599 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:11.178738 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:11.178755 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:11.178768 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:46:11.178780 | orchestrator | 2026-03-29 02:46:11.178792 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-29 02:46:11.178805 | orchestrator | Sunday 29 March 2026 02:45:53 +0000 (0:00:01.086) 0:00:32.230 ********** 2026-03-29 02:46:11.178817 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.178828 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:11.178839 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:11.178850 | orchestrator | 2026-03-29 02:46:11.178886 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-29 02:46:11.178961 | orchestrator | Sunday 29 March 2026 02:45:54 +0000 (0:00:00.347) 0:00:32.578 ********** 2026-03-29 02:46:11.178974 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.178985 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:11.178996 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:11.179007 | orchestrator | 2026-03-29 02:46:11.179018 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-29 02:46:11.179030 | orchestrator | Sunday 29 March 2026 02:45:54 +0000 
(0:00:00.340) 0:00:32.918 ********** 2026-03-29 02:46:11.179040 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.179051 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:11.179062 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:11.179073 | orchestrator | 2026-03-29 02:46:11.179084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-29 02:46:11.179095 | orchestrator | Sunday 29 March 2026 02:45:55 +0000 (0:00:00.553) 0:00:33.471 ********** 2026-03-29 02:46:11.179106 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:11.179118 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:11.179128 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:11.179139 | orchestrator | 2026-03-29 02:46:11.179150 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-29 02:46:11.179161 | orchestrator | Sunday 29 March 2026 02:45:55 +0000 (0:00:00.464) 0:00:33.936 ********** 2026-03-29 02:46:11.179172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 02:46:11.179183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 02:46:11.179194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 02:46:11.179205 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.179216 | orchestrator | 2026-03-29 02:46:11.179227 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-29 02:46:11.179238 | orchestrator | Sunday 29 March 2026 02:45:55 +0000 (0:00:00.405) 0:00:34.341 ********** 2026-03-29 02:46:11.179273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 02:46:11.179285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 02:46:11.179296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 02:46:11.179306 | orchestrator | 
skipping: [testbed-node-3] 2026-03-29 02:46:11.179317 | orchestrator | 2026-03-29 02:46:11.179328 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-29 02:46:11.179339 | orchestrator | Sunday 29 March 2026 02:45:56 +0000 (0:00:00.395) 0:00:34.737 ********** 2026-03-29 02:46:11.179363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 02:46:11.179374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 02:46:11.179385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 02:46:11.179396 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.179406 | orchestrator | 2026-03-29 02:46:11.179417 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-29 02:46:11.179428 | orchestrator | Sunday 29 March 2026 02:45:56 +0000 (0:00:00.409) 0:00:35.147 ********** 2026-03-29 02:46:11.179439 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:11.179450 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:11.179461 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:11.179471 | orchestrator | 2026-03-29 02:46:11.179482 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-29 02:46:11.179493 | orchestrator | Sunday 29 March 2026 02:45:57 +0000 (0:00:00.335) 0:00:35.483 ********** 2026-03-29 02:46:11.179504 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-29 02:46:11.179515 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-29 02:46:11.179526 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-29 02:46:11.179537 | orchestrator | 2026-03-29 02:46:11.179548 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-29 02:46:11.179559 | orchestrator | Sunday 29 March 2026 02:45:58 +0000 (0:00:01.114) 0:00:36.597 ********** 2026-03-29 02:46:11.179570 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 02:46:11.179581 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 02:46:11.179592 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 02:46:11.179603 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-29 02:46:11.179614 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 02:46:11.179624 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 02:46:11.179635 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 02:46:11.179646 | orchestrator | 2026-03-29 02:46:11.179671 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-29 02:46:11.179693 | orchestrator | Sunday 29 March 2026 02:45:59 +0000 (0:00:00.850) 0:00:37.448 ********** 2026-03-29 02:46:11.179722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 02:46:11.179734 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 02:46:11.179745 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 02:46:11.179756 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-29 02:46:11.179767 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 02:46:11.179777 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 02:46:11.179788 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 02:46:11.179799 | orchestrator | 2026-03-29 02:46:11.179810 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 02:46:11.179829 | orchestrator | Sunday 29 March 2026 02:46:01 +0000 (0:00:01.970) 0:00:39.419 ********** 2026-03-29 02:46:11.179841 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:46:11.179853 | orchestrator | 2026-03-29 02:46:11.179923 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 02:46:11.179935 | orchestrator | Sunday 29 March 2026 02:46:02 +0000 (0:00:01.281) 0:00:40.700 ********** 2026-03-29 02:46:11.179946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:46:11.179957 | orchestrator | 2026-03-29 02:46:11.179968 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 02:46:11.179979 | orchestrator | Sunday 29 March 2026 02:46:03 +0000 (0:00:01.275) 0:00:41.976 ********** 2026-03-29 02:46:11.179990 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.180001 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:11.180012 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:11.180023 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:46:11.180033 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:46:11.180044 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:46:11.180055 | orchestrator | 2026-03-29 02:46:11.180066 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 02:46:11.180077 | orchestrator | Sunday 29 March 2026 02:46:04 +0000 (0:00:01.245) 0:00:43.222 ********** 2026-03-29 02:46:11.180088 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
02:46:11.180099 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:11.180110 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:11.180120 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:11.180131 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:11.180142 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:11.180153 | orchestrator | 2026-03-29 02:46:11.180164 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 02:46:11.180175 | orchestrator | Sunday 29 March 2026 02:46:05 +0000 (0:00:00.742) 0:00:43.965 ********** 2026-03-29 02:46:11.180185 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:11.180196 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:11.180207 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:11.180218 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:11.180229 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:11.180240 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:11.180250 | orchestrator | 2026-03-29 02:46:11.180268 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 02:46:11.180279 | orchestrator | Sunday 29 March 2026 02:46:06 +0000 (0:00:00.969) 0:00:44.934 ********** 2026-03-29 02:46:11.180290 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:11.180301 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:11.180311 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:11.180322 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:11.180333 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:11.180344 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:11.180355 | orchestrator | 2026-03-29 02:46:11.180366 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 02:46:11.180377 | orchestrator | Sunday 29 March 2026 02:46:07 +0000 (0:00:00.798) 0:00:45.732 ********** 
2026-03-29 02:46:11.180387 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.180398 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:11.180409 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:11.180420 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:46:11.180431 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:46:11.180442 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:46:11.180452 | orchestrator | 2026-03-29 02:46:11.180464 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 02:46:11.180482 | orchestrator | Sunday 29 March 2026 02:46:08 +0000 (0:00:01.300) 0:00:47.033 ********** 2026-03-29 02:46:11.180493 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.180504 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:11.180515 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:11.180526 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:11.180536 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:11.180548 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:11.180558 | orchestrator | 2026-03-29 02:46:11.180569 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 02:46:11.180580 | orchestrator | Sunday 29 March 2026 02:46:09 +0000 (0:00:00.641) 0:00:47.674 ********** 2026-03-29 02:46:11.180591 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:11.180602 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:11.180613 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:11.180624 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:11.180634 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:11.180645 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:11.180656 | orchestrator | 2026-03-29 02:46:11.180667 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] 
************************* 2026-03-29 02:46:11.180678 | orchestrator | Sunday 29 March 2026 02:46:10 +0000 (0:00:00.867) 0:00:48.542 ********** 2026-03-29 02:46:11.180689 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:11.180708 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:30.049795 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:30.049970 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:46:30.049988 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:46:30.049997 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:46:30.050006 | orchestrator | 2026-03-29 02:46:30.050061 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 02:46:30.050071 | orchestrator | Sunday 29 March 2026 02:46:11 +0000 (0:00:01.032) 0:00:49.574 ********** 2026-03-29 02:46:30.050076 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:30.050081 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:30.050087 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:30.050092 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:46:30.050097 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:46:30.050103 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:46:30.050108 | orchestrator | 2026-03-29 02:46:30.050114 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 02:46:30.050120 | orchestrator | Sunday 29 March 2026 02:46:12 +0000 (0:00:01.425) 0:00:51.000 ********** 2026-03-29 02:46:30.050125 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:30.050131 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:30.050136 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:30.050142 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:30.050148 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.050153 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:30.050158 | orchestrator | 2026-03-29 02:46:30.050163 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 02:46:30.050203 | orchestrator | Sunday 29 March 2026 02:46:13 +0000 (0:00:00.626) 0:00:51.626 ********** 2026-03-29 02:46:30.050209 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:30.050214 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:30.050219 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:30.050225 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:46:30.050230 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:46:30.050235 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:46:30.050240 | orchestrator | 2026-03-29 02:46:30.050245 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 02:46:30.050250 | orchestrator | Sunday 29 March 2026 02:46:14 +0000 (0:00:00.920) 0:00:52.547 ********** 2026-03-29 02:46:30.050256 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:30.050261 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:30.050284 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:30.050290 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:30.050295 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.050300 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:30.050305 | orchestrator | 2026-03-29 02:46:30.050310 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 02:46:30.050316 | orchestrator | Sunday 29 March 2026 02:46:14 +0000 (0:00:00.684) 0:00:53.231 ********** 2026-03-29 02:46:30.050321 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:30.050326 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:30.050331 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:30.050338 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:30.050344 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.050350 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 02:46:30.050356 | orchestrator | 2026-03-29 02:46:30.050362 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 02:46:30.050368 | orchestrator | Sunday 29 March 2026 02:46:15 +0000 (0:00:00.885) 0:00:54.117 ********** 2026-03-29 02:46:30.050374 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:30.050380 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:30.050386 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:30.050392 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:30.050398 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.050415 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:30.050421 | orchestrator | 2026-03-29 02:46:30.050426 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 02:46:30.050432 | orchestrator | Sunday 29 March 2026 02:46:16 +0000 (0:00:00.625) 0:00:54.742 ********** 2026-03-29 02:46:30.050437 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:30.050442 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:30.050457 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:30.050462 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:30.050474 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.050479 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:30.050484 | orchestrator | 2026-03-29 02:46:30.050490 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 02:46:30.050495 | orchestrator | Sunday 29 March 2026 02:46:17 +0000 (0:00:00.862) 0:00:55.604 ********** 2026-03-29 02:46:30.050500 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:30.050505 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:30.050510 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:30.050515 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 02:46:30.050520 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.050525 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:30.050530 | orchestrator | 2026-03-29 02:46:30.050536 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 02:46:30.050541 | orchestrator | Sunday 29 March 2026 02:46:17 +0000 (0:00:00.591) 0:00:56.196 ********** 2026-03-29 02:46:30.050546 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:30.050551 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:30.050556 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:30.050561 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:46:30.050566 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:46:30.050571 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:46:30.050576 | orchestrator | 2026-03-29 02:46:30.050581 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 02:46:30.050587 | orchestrator | Sunday 29 March 2026 02:46:18 +0000 (0:00:00.936) 0:00:57.133 ********** 2026-03-29 02:46:30.050592 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:30.050597 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:30.050602 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:46:30.050615 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:46:30.050686 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:46:30.050692 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:46:30.050705 | orchestrator | 2026-03-29 02:46:30.050710 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 02:46:30.050716 | orchestrator | Sunday 29 March 2026 02:46:19 +0000 (0:00:00.918) 0:00:58.051 ********** 2026-03-29 02:46:30.050721 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:46:30.050741 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:46:30.050747 | orchestrator | ok: [testbed-node-5] 
2026-03-29 02:46:30.050752 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:46:30.050757 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:46:30.050762 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:46:30.050767 | orchestrator | 2026-03-29 02:46:30.050772 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-29 02:46:30.050778 | orchestrator | Sunday 29 March 2026 02:46:20 +0000 (0:00:01.349) 0:00:59.401 ********** 2026-03-29 02:46:30.050783 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:46:30.050788 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:46:30.050793 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:46:30.050800 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:46:30.050808 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:46:30.050818 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:46:30.050831 | orchestrator | 2026-03-29 02:46:30.050839 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-29 02:46:30.050846 | orchestrator | Sunday 29 March 2026 02:46:22 +0000 (0:00:01.566) 0:01:00.967 ********** 2026-03-29 02:46:30.050855 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:46:30.050862 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:46:30.050869 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:46:30.050921 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:46:30.050930 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:46:30.050937 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:46:30.050945 | orchestrator | 2026-03-29 02:46:30.050954 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-29 02:46:30.050962 | orchestrator | Sunday 29 March 2026 02:46:24 +0000 (0:00:02.232) 0:01:03.199 ********** 2026-03-29 02:46:30.050973 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:46:30.050982 | orchestrator | 2026-03-29 02:46:30.050991 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-29 02:46:30.050997 | orchestrator | Sunday 29 March 2026 02:46:25 +0000 (0:00:01.047) 0:01:04.246 ********** 2026-03-29 02:46:30.051003 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:30.051008 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:30.051012 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:30.051017 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:30.051023 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.051028 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:30.051032 | orchestrator | 2026-03-29 02:46:30.051038 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-29 02:46:30.051043 | orchestrator | Sunday 29 March 2026 02:46:26 +0000 (0:00:00.541) 0:01:04.787 ********** 2026-03-29 02:46:30.051078 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:30.051085 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:30.051090 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:30.051095 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:30.051101 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.051106 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:30.051111 | orchestrator | 2026-03-29 02:46:30.051116 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-29 02:46:30.051121 | orchestrator | Sunday 29 March 2026 02:46:27 +0000 (0:00:00.691) 0:01:05.479 ********** 2026-03-29 02:46:30.051126 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-29 
02:46:30.051138 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-29 02:46:30.051150 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-29 02:46:30.051155 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-29 02:46:30.051160 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-29 02:46:30.051165 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-29 02:46:30.051170 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-29 02:46:30.051176 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-29 02:46:30.051182 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-29 02:46:30.051187 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-29 02:46:30.051192 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-29 02:46:30.051197 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-29 02:46:30.051204 | orchestrator | 2026-03-29 02:46:30.051213 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-29 02:46:30.051221 | orchestrator | Sunday 29 March 2026 02:46:28 +0000 (0:00:01.296) 0:01:06.775 ********** 2026-03-29 02:46:30.051229 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:46:30.051237 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:46:30.051245 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:46:30.051254 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:46:30.051263 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:46:30.051272 | 
orchestrator | changed: [testbed-node-2] 2026-03-29 02:46:30.051280 | orchestrator | 2026-03-29 02:46:30.051289 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-29 02:46:30.051297 | orchestrator | Sunday 29 March 2026 02:46:29 +0000 (0:00:01.095) 0:01:07.870 ********** 2026-03-29 02:46:30.051302 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:46:30.051307 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:46:30.051312 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:46:30.051317 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:46:30.051322 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:46:30.051327 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:46:30.051332 | orchestrator | 2026-03-29 02:46:30.051344 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-29 02:47:37.656480 | orchestrator | Sunday 29 March 2026 02:46:30 +0000 (0:00:00.578) 0:01:08.449 ********** 2026-03-29 02:47:37.656566 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.656574 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.656579 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.656583 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.656587 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.656591 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.656595 | orchestrator | 2026-03-29 02:47:37.656600 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-29 02:47:37.656604 | orchestrator | Sunday 29 March 2026 02:46:30 +0000 (0:00:00.708) 0:01:09.157 ********** 2026-03-29 02:47:37.656608 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.656612 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.656616 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.656620 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.656624 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.656627 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.656631 | orchestrator | 2026-03-29 02:47:37.656635 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-29 02:47:37.656638 | orchestrator | Sunday 29 March 2026 02:46:31 +0000 (0:00:00.534) 0:01:09.691 ********** 2026-03-29 02:47:37.656657 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:47:37.656663 | orchestrator | 2026-03-29 02:47:37.656667 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-29 02:47:37.656670 | orchestrator | Sunday 29 March 2026 02:46:32 +0000 (0:00:01.108) 0:01:10.800 ********** 2026-03-29 02:47:37.656674 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:47:37.656679 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:47:37.656683 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:47:37.656686 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:47:37.656690 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:47:37.656694 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:47:37.656697 | orchestrator | 2026-03-29 02:47:37.656701 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-29 02:47:37.656705 | orchestrator | Sunday 29 March 2026 02:47:24 +0000 (0:00:52.022) 0:02:02.822 ********** 2026-03-29 02:47:37.656709 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-29 02:47:37.656713 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-29 02:47:37.656717 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-29 02:47:37.656720 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.656724 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-29 02:47:37.656728 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-29 02:47:37.656732 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-29 02:47:37.656736 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.656739 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-29 02:47:37.656751 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-29 02:47:37.656765 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-29 02:47:37.656769 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.656773 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-29 02:47:37.656776 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-29 02:47:37.656780 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-29 02:47:37.656784 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.656788 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-29 02:47:37.656791 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-29 02:47:37.656795 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-29 02:47:37.656799 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.656802 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-29 02:47:37.656806 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-29 02:47:37.656810 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-29 02:47:37.656813 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.656817 | orchestrator | 2026-03-29 02:47:37.656821 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-29 02:47:37.656825 | orchestrator | Sunday 29 March 2026 02:47:25 +0000 (0:00:00.666) 0:02:03.489 ********** 2026-03-29 02:47:37.656828 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.656832 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.656836 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.656840 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.656844 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.656852 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.656856 | orchestrator | 2026-03-29 02:47:37.656860 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-29 02:47:37.656864 | orchestrator | Sunday 29 March 2026 02:47:25 +0000 (0:00:00.873) 0:02:04.363 ********** 2026-03-29 02:47:37.656868 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.656871 | orchestrator | 2026-03-29 02:47:37.656875 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-29 02:47:37.656879 | orchestrator | Sunday 29 March 2026 02:47:26 +0000 (0:00:00.160) 0:02:04.524 ********** 2026-03-29 02:47:37.656883 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.656897 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.656901 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.656904 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.656908 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.656912 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 02:47:37.656933 | orchestrator | 2026-03-29 02:47:37.656938 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-29 02:47:37.656941 | orchestrator | Sunday 29 March 2026 02:47:26 +0000 (0:00:00.613) 0:02:05.137 ********** 2026-03-29 02:47:37.656945 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.656949 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.656953 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.656956 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.656960 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.656964 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.656967 | orchestrator | 2026-03-29 02:47:37.656971 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-29 02:47:37.656975 | orchestrator | Sunday 29 March 2026 02:47:27 +0000 (0:00:00.931) 0:02:06.068 ********** 2026-03-29 02:47:37.656978 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.656982 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.656986 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.656990 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.656993 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.656997 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.657001 | orchestrator | 2026-03-29 02:47:37.657004 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-29 02:47:37.657008 | orchestrator | Sunday 29 March 2026 02:47:28 +0000 (0:00:00.673) 0:02:06.742 ********** 2026-03-29 02:47:37.657012 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:47:37.657016 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:47:37.657019 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:47:37.657023 | orchestrator | ok: [testbed-node-1] 2026-03-29 
02:47:37.657027 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:47:37.657031 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:47:37.657034 | orchestrator | 2026-03-29 02:47:37.657038 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-29 02:47:37.657042 | orchestrator | Sunday 29 March 2026 02:47:31 +0000 (0:00:03.316) 0:02:10.059 ********** 2026-03-29 02:47:37.657045 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:47:37.657049 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:47:37.657053 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:47:37.657057 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:47:37.657060 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:47:37.657064 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:47:37.657068 | orchestrator | 2026-03-29 02:47:37.657071 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-29 02:47:37.657075 | orchestrator | Sunday 29 March 2026 02:47:32 +0000 (0:00:00.615) 0:02:10.674 ********** 2026-03-29 02:47:37.657080 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:47:37.657085 | orchestrator | 2026-03-29 02:47:37.657089 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-29 02:47:37.657096 | orchestrator | Sunday 29 March 2026 02:47:33 +0000 (0:00:01.371) 0:02:12.045 ********** 2026-03-29 02:47:37.657100 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.657104 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.657107 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.657111 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.657118 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.657121 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 02:47:37.657125 | orchestrator | 2026-03-29 02:47:37.657129 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-29 02:47:37.657133 | orchestrator | Sunday 29 March 2026 02:47:34 +0000 (0:00:00.893) 0:02:12.938 ********** 2026-03-29 02:47:37.657136 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.657140 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.657144 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.657147 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.657151 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.657155 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.657159 | orchestrator | 2026-03-29 02:47:37.657163 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-29 02:47:37.657166 | orchestrator | Sunday 29 March 2026 02:47:35 +0000 (0:00:00.620) 0:02:13.559 ********** 2026-03-29 02:47:37.657170 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.657174 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.657177 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.657181 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.657185 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.657189 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.657192 | orchestrator | 2026-03-29 02:47:37.657196 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-29 02:47:37.657200 | orchestrator | Sunday 29 March 2026 02:47:36 +0000 (0:00:00.939) 0:02:14.499 ********** 2026-03-29 02:47:37.657204 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.657207 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.657211 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.657215 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 02:47:37.657218 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.657222 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.657226 | orchestrator | 2026-03-29 02:47:37.657229 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-29 02:47:37.657233 | orchestrator | Sunday 29 March 2026 02:47:36 +0000 (0:00:00.619) 0:02:15.119 ********** 2026-03-29 02:47:37.657237 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:37.657241 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:37.657244 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:37.657248 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:37.657252 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:37.657255 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:37.657259 | orchestrator | 2026-03-29 02:47:37.657263 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-29 02:47:37.657269 | orchestrator | Sunday 29 March 2026 02:47:37 +0000 (0:00:00.937) 0:02:16.056 ********** 2026-03-29 02:47:49.659206 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:49.659307 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:49.659320 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:49.659330 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:49.659339 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:49.659347 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:49.659356 | orchestrator | 2026-03-29 02:47:49.659367 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-29 02:47:49.659377 | orchestrator | Sunday 29 March 2026 02:47:38 +0000 (0:00:00.679) 0:02:16.736 ********** 2026-03-29 02:47:49.659407 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:49.659416 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 02:47:49.659425 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:49.659434 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:49.659443 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:49.659452 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:49.659460 | orchestrator | 2026-03-29 02:47:49.659469 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-29 02:47:49.659478 | orchestrator | Sunday 29 March 2026 02:47:39 +0000 (0:00:00.929) 0:02:17.666 ********** 2026-03-29 02:47:49.659486 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:47:49.659495 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:47:49.659503 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:47:49.659512 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:47:49.659521 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:47:49.659529 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:47:49.659538 | orchestrator | 2026-03-29 02:47:49.659546 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-29 02:47:49.659555 | orchestrator | Sunday 29 March 2026 02:47:39 +0000 (0:00:00.620) 0:02:18.286 ********** 2026-03-29 02:47:49.659563 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:47:49.659573 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:47:49.659581 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:47:49.659590 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:47:49.659598 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:47:49.659607 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:47:49.659615 | orchestrator | 2026-03-29 02:47:49.659624 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-29 02:47:49.659632 | orchestrator | Sunday 29 March 2026 02:47:41 +0000 (0:00:01.706) 0:02:19.993 ********** 2026-03-29 
02:47:49.659642 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:47:49.659652 | orchestrator | 2026-03-29 02:47:49.659661 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-29 02:47:49.659670 | orchestrator | Sunday 29 March 2026 02:47:42 +0000 (0:00:01.293) 0:02:21.287 ********** 2026-03-29 02:47:49.659678 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-29 02:47:49.659687 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-29 02:47:49.659696 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-29 02:47:49.659704 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-29 02:47:49.659713 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-29 02:47:49.659721 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-29 02:47:49.659730 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-29 02:47:49.659750 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-29 02:47:49.659761 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-29 02:47:49.659771 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-29 02:47:49.659781 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-29 02:47:49.659791 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-29 02:47:49.659801 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-29 02:47:49.659812 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-29 02:47:49.659821 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-29 02:47:49.659832 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 
2026-03-29 02:47:49.659842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-29 02:47:49.659851 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-29 02:47:49.659861 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-29 02:47:49.659878 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-29 02:47:49.659888 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-29 02:47:49.659898 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-29 02:47:49.659908 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-29 02:47:49.659918 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-29 02:47:49.659949 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-29 02:47:49.659958 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-29 02:47:49.659967 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-29 02:47:49.659975 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-29 02:47:49.659984 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-29 02:47:49.659992 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-29 02:47:49.660001 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-29 02:47:49.660009 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-29 02:47:49.660018 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-29 02:47:49.660026 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-29 02:47:49.660035 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-29 02:47:49.660057 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-29 02:47:49.660066 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-29 02:47:49.660075 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-29 02:47:49.660084 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-29 02:47:49.660092 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-29 02:47:49.660101 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-29 02:47:49.660110 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-29 02:47:49.660118 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-29 02:47:49.660127 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-29 02:47:49.660135 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-29 02:47:49.660144 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 02:47:49.660152 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-29 02:47:49.660161 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-29 02:47:49.660169 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 02:47:49.660178 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 02:47:49.660186 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-29 02:47:49.660195 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 02:47:49.660203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 02:47:49.660212 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 02:47:49.660220 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 02:47:49.660229 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 02:47:49.660238 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 02:47:49.660246 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 02:47:49.660255 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 02:47:49.660263 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 02:47:49.660272 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 02:47:49.660286 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 02:47:49.660295 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 02:47:49.660304 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 02:47:49.660312 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 02:47:49.660321 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 02:47:49.660329 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 02:47:49.660342 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 02:47:49.660351 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 02:47:49.660359 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 02:47:49.660368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 02:47:49.660376 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 02:47:49.660385 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 02:47:49.660393 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 
2026-03-29 02:47:49.660402 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 02:47:49.660410 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 02:47:49.660419 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 02:47:49.660427 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 02:47:49.660436 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 02:47:49.660445 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 02:47:49.660453 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 02:47:49.660462 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-29 02:47:49.660470 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-29 02:47:49.660479 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 02:47:49.660488 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-29 02:47:49.660496 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 02:47:49.660505 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 02:47:49.660514 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-29 02:47:49.660522 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-29 02:47:49.660531 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-29 02:47:49.660540 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-29 02:47:49.660553 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-29 02:48:05.423736 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-29 02:48:05.423830 | 
orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-29 02:48:05.423841 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-29 02:48:05.423849 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-29 02:48:05.423857 | orchestrator |
2026-03-29 02:48:05.423865 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-29 02:48:05.423875 | orchestrator | Sunday 29 March 2026 02:47:49 +0000 (0:00:06.724) 0:02:28.011 **********
2026-03-29 02:48:05.423882 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.423891 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.423898 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.423906 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:48:05.424027 | orchestrator |
2026-03-29 02:48:05.424041 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-29 02:48:05.424048 | orchestrator | Sunday 29 March 2026 02:47:50 +0000 (0:00:01.087) 0:02:29.099 **********
2026-03-29 02:48:05.424056 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.424065 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.424072 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.424079 | orchestrator |
2026-03-29 02:48:05.424086 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-29 02:48:05.424094 | orchestrator | Sunday 29 March 2026 02:47:51 +0000 (0:00:00.754) 0:02:29.853 **********
2026-03-29 02:48:05.424101 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.424108 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.424116 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.424123 | orchestrator |
2026-03-29 02:48:05.424130 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-29 02:48:05.424137 | orchestrator | Sunday 29 March 2026 02:47:52 +0000 (0:00:01.235) 0:02:31.088 **********
2026-03-29 02:48:05.424144 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:05.424152 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:05.424159 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:05.424166 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.424173 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.424180 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.424188 | orchestrator |
2026-03-29 02:48:05.424195 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-29 02:48:05.424214 | orchestrator | Sunday 29 March 2026 02:47:53 +0000 (0:00:00.890) 0:02:31.979 **********
2026-03-29 02:48:05.424221 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:05.424229 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:05.424241 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:05.424255 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.424272 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.424285 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.424297 | orchestrator |
2026-03-29 02:48:05.424308 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-29 02:48:05.424321 | orchestrator | Sunday 29 March 2026 02:47:54 +0000 (0:00:00.660) 0:02:32.640 **********
2026-03-29 02:48:05.424333 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:05.424344 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:05.424355 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:05.424368 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.424380 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.424393 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.424406 | orchestrator |
2026-03-29 02:48:05.424418 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-29 02:48:05.424430 | orchestrator | Sunday 29 March 2026 02:47:55 +0000 (0:00:00.904) 0:02:33.545 **********
2026-03-29 02:48:05.424443 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:05.424457 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:05.424470 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:05.424484 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.424497 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.424509 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.424534 | orchestrator |
2026-03-29 02:48:05.424548 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-29 02:48:05.424561 | orchestrator | Sunday 29 March 2026 02:47:55 +0000 (0:00:00.658) 0:02:34.204 **********
2026-03-29 02:48:05.424575 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:05.424587 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:05.424600 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:05.424613 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.424625 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.424637 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.424649 | orchestrator |
2026-03-29 02:48:05.424661 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-29 02:48:05.424674 | orchestrator | Sunday 29 March 2026 02:47:56 +0000 (0:00:00.897) 0:02:35.102 **********
2026-03-29 02:48:05.424686 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:05.424698 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:05.424710 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:05.424722 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.424754 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.424767 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.424779 | orchestrator |
2026-03-29 02:48:05.424791 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-29 02:48:05.424804 | orchestrator | Sunday 29 March 2026 02:47:57 +0000 (0:00:00.607) 0:02:35.709 **********
2026-03-29 02:48:05.424816 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:05.424829 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:05.424841 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:05.424852 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.424864 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.424877 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.424889 | orchestrator |
2026-03-29 02:48:05.424900 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-29 02:48:05.424912 | orchestrator | Sunday 29 March 2026 02:47:58 +0000 (0:00:00.891) 0:02:36.601 **********
2026-03-29 02:48:05.424925 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:05.424962 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:05.424975 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:05.424987 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.424997 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.425009 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.425021 | orchestrator |
2026-03-29 02:48:05.425034 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-29 02:48:05.425046 | orchestrator | Sunday 29 March 2026 02:47:58 +0000 (0:00:00.626) 0:02:37.227 **********
2026-03-29 02:48:05.425059 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.425071 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.425083 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.425096 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:05.425108 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:05.425121 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:05.425133 | orchestrator |
2026-03-29 02:48:05.425146 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-29 02:48:05.425157 | orchestrator | Sunday 29 March 2026 02:48:01 +0000 (0:00:03.175) 0:02:40.402 **********
2026-03-29 02:48:05.425170 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:05.425183 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:05.425195 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:05.425207 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.425220 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.425232 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.425244 | orchestrator |
2026-03-29 02:48:05.425257 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-29 02:48:05.425279 | orchestrator | Sunday 29 March 2026 02:48:02 +0000 (0:00:00.617) 0:02:41.020 **********
2026-03-29 02:48:05.425291 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:05.425304 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:05.425316 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:05.425327 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.425339 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.425352 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.425365 | orchestrator |
2026-03-29 02:48:05.425376 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-29 02:48:05.425389 | orchestrator | Sunday 29 March 2026 02:48:03 +0000 (0:00:00.961) 0:02:41.981 **********
2026-03-29 02:48:05.425402 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:05.425413 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:05.425425 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:05.425446 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.425460 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.425472 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.425486 | orchestrator |
2026-03-29 02:48:05.425499 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-29 02:48:05.425511 | orchestrator | Sunday 29 March 2026 02:48:04 +0000 (0:00:00.629) 0:02:42.611 **********
2026-03-29 02:48:05.425523 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.425536 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.425548 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-29 02:48:05.425561 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:05.425573 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:05.425585 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:05.425597 | orchestrator |
2026-03-29 02:48:05.425610 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-29 02:48:05.425623 | orchestrator | Sunday 29 March 2026 02:48:05 +0000 (0:00:00.979) 0:02:43.591 **********
2026-03-29 02:48:05.425640 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-29 02:48:05.425656 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-29 02:48:05.425670 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:05.425690 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-29 02:48:24.443272 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-29 02:48:24.443362 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:24.443371 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-29 02:48:24.443419 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-29 02:48:24.443425 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:24.443429 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443434 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443437 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443441 | orchestrator |
2026-03-29 02:48:24.443446 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-29 02:48:24.443452 | orchestrator | Sunday 29 March 2026 02:48:06 +0000 (0:00:00.994) 0:02:44.585 **********
2026-03-29 02:48:24.443455 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443459 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:24.443463 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:24.443467 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443471 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443474 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443478 | orchestrator |
2026-03-29 02:48:24.443482 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-29 02:48:24.443486 | orchestrator | Sunday 29 March 2026 02:48:06 +0000 (0:00:00.665) 0:02:45.251 **********
2026-03-29 02:48:24.443489 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443495 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:24.443503 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:24.443507 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443511 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443515 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443518 | orchestrator |
2026-03-29 02:48:24.443523 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-29 02:48:24.443528 | orchestrator | Sunday 29 March 2026 02:48:07 +0000 (0:00:00.934) 0:02:46.185 **********
2026-03-29 02:48:24.443542 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443546 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:24.443550 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:24.443553 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443557 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443561 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443564 | orchestrator |
2026-03-29 02:48:24.443569 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-29 02:48:24.443573 | orchestrator | Sunday 29 March 2026 02:48:08 +0000 (0:00:00.692) 0:02:46.878 **********
2026-03-29 02:48:24.443576 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443580 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:24.443584 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:24.443587 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443591 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443595 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443599 | orchestrator |
2026-03-29 02:48:24.443602 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-29 02:48:24.443606 | orchestrator | Sunday 29 March 2026 02:48:09 +0000 (0:00:00.945) 0:02:47.823 **********
2026-03-29 02:48:24.443610 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443614 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:24.443617 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:24.443621 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443625 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443628 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443637 | orchestrator |
2026-03-29 02:48:24.443641 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-29 02:48:24.443644 | orchestrator | Sunday 29 March 2026 02:48:10 +0000 (0:00:00.652) 0:02:48.475 **********
2026-03-29 02:48:24.443648 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:24.443652 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:24.443656 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:24.443660 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443663 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443667 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443671 | orchestrator |
2026-03-29 02:48:24.443675 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-29 02:48:24.443678 | orchestrator | Sunday 29 March 2026 02:48:11 +0000 (0:00:00.939) 0:02:49.415 **********
2026-03-29 02:48:24.443682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:48:24.443686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:48:24.443690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:48:24.443694 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443697 | orchestrator |
2026-03-29 02:48:24.443701 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-29 02:48:24.443705 | orchestrator | Sunday 29 March 2026 02:48:11 +0000 (0:00:00.430) 0:02:49.845 **********
2026-03-29 02:48:24.443720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:48:24.443724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:48:24.443728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:48:24.443732 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443736 | orchestrator |
2026-03-29 02:48:24.443739 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-29 02:48:24.443743 | orchestrator | Sunday 29 March 2026 02:48:11 +0000 (0:00:00.453) 0:02:50.299 **********
2026-03-29 02:48:24.443747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:48:24.443751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:48:24.443754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:48:24.443758 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443762 | orchestrator |
2026-03-29 02:48:24.443766 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-29 02:48:24.443769 | orchestrator | Sunday 29 March 2026 02:48:12 +0000 (0:00:00.465) 0:02:50.764 **********
2026-03-29 02:48:24.443773 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:24.443777 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:24.443781 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:24.443784 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443788 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443792 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443795 | orchestrator |
2026-03-29 02:48:24.443799 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-29 02:48:24.443803 | orchestrator | Sunday 29 March 2026 02:48:13 +0000 (0:00:00.653) 0:02:51.417 **********
2026-03-29 02:48:24.443807 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-29 02:48:24.443811 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-29 02:48:24.443814 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-29 02:48:24.443818 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-29 02:48:24.443822 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.443826 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-29 02:48:24.443830 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:24.443834 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-29 02:48:24.443839 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:24.443843 | orchestrator |
2026-03-29 02:48:24.443847 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-29 02:48:24.443855 | orchestrator | Sunday 29 March 2026 02:48:15 +0000 (0:00:02.001) 0:02:53.418 **********
2026-03-29 02:48:24.443860 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:48:24.443864 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:48:24.443868 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:48:24.443873 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:48:24.443877 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:48:24.443881 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:48:24.443886 | orchestrator |
2026-03-29 02:48:24.443890 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-29 02:48:24.443894 | orchestrator | Sunday 29 March 2026 02:48:17 +0000 (0:00:02.801) 0:02:56.219 **********
2026-03-29 02:48:24.443899 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:48:24.443906 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:48:24.443910 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:48:24.443915 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:48:24.443921 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:48:24.443928 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:48:24.443932 | orchestrator |
2026-03-29 02:48:24.443937 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-29 02:48:24.443941 | orchestrator | Sunday 29 March 2026 02:48:18 +0000 (0:00:01.036) 0:02:57.256 **********
2026-03-29 02:48:24.443945 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:24.443995 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:24.444003 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:24.444008 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:48:24.444012 | orchestrator |
2026-03-29 02:48:24.444017 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-29 02:48:24.444021 | orchestrator | Sunday 29 March 2026 02:48:20 +0000 (0:00:01.305) 0:02:58.561 **********
2026-03-29 02:48:24.444025 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:48:24.444030 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:48:24.444034 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:48:24.444038 | orchestrator |
2026-03-29 02:48:24.444043 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-29 02:48:24.444047 | orchestrator | Sunday 29 March 2026 02:48:20 +0000 (0:00:00.344) 0:02:58.906 **********
2026-03-29 02:48:24.444051 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:48:24.444056 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:48:24.444060 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:48:24.444064 | orchestrator |
2026-03-29 02:48:24.444069 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-29 02:48:24.444073 | orchestrator | Sunday 29 March 2026 02:48:22 +0000 (0:00:01.598) 0:03:00.504 **********
2026-03-29 02:48:24.444077 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 02:48:24.444082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 02:48:24.444088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 02:48:24.444095 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.444099 | orchestrator |
2026-03-29 02:48:24.444104 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-29 02:48:24.444109 | orchestrator | Sunday 29 March 2026 02:48:22 +0000 (0:00:00.734) 0:03:01.239 **********
2026-03-29 02:48:24.444116 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:48:24.444121 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:48:24.444125 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:48:24.444129 | orchestrator |
2026-03-29 02:48:24.444134 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-29 02:48:24.444138 | orchestrator | Sunday 29 March 2026 02:48:23 +0000 (0:00:00.389) 0:03:01.628 **********
2026-03-29 02:48:24.444143 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:24.444151 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:42.692085 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:42.692220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:48:42.692235 | orchestrator |
2026-03-29 02:48:42.692247 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-29 02:48:42.692257 | orchestrator | Sunday 29 March 2026 02:48:24 +0000 (0:00:01.211) 0:03:02.840 **********
2026-03-29 02:48:42.692266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:48:42.692277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:48:42.692286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:48:42.692295 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692305 | orchestrator |
2026-03-29 02:48:42.692315 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-29 02:48:42.692324 | orchestrator | Sunday 29 March 2026 02:48:24 +0000 (0:00:00.486) 0:03:03.326 **********
2026-03-29 02:48:42.692333 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692342 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:42.692351 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:42.692360 | orchestrator |
2026-03-29 02:48:42.692369 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-29 02:48:42.692379 | orchestrator | Sunday 29 March 2026 02:48:25 +0000 (0:00:00.400) 0:03:03.726 **********
2026-03-29 02:48:42.692387 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692396 | orchestrator |
2026-03-29 02:48:42.692405 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-29 02:48:42.692414 | orchestrator | Sunday 29 March 2026 02:48:25 +0000 (0:00:00.257) 0:03:03.984 **********
2026-03-29 02:48:42.692423 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692431 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:42.692438 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:42.692447 | orchestrator |
2026-03-29 02:48:42.692455 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-29 02:48:42.692463 | orchestrator | Sunday 29 March 2026 02:48:26 +0000 (0:00:00.673) 0:03:04.657 **********
2026-03-29 02:48:42.692472 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692481 | orchestrator |
2026-03-29 02:48:42.692489 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-29 02:48:42.692498 | orchestrator | Sunday 29 March 2026 02:48:26 +0000 (0:00:00.264) 0:03:04.922 **********
2026-03-29 02:48:42.692507 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692516 | orchestrator |
2026-03-29 02:48:42.692525 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-29 02:48:42.692534 | orchestrator | Sunday 29 March 2026 02:48:26 +0000 (0:00:00.275) 0:03:05.197 **********
2026-03-29 02:48:42.692543 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692552 | orchestrator |
2026-03-29 02:48:42.692560 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-29 02:48:42.692569 | orchestrator | Sunday 29 March 2026 02:48:26 +0000 (0:00:00.184) 0:03:05.382 **********
2026-03-29 02:48:42.692578 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692586 | orchestrator |
2026-03-29 02:48:42.692610 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-29 02:48:42.692619 | orchestrator | Sunday 29 March 2026 02:48:27 +0000 (0:00:00.277) 0:03:05.659 **********
2026-03-29 02:48:42.692627 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692637 | orchestrator |
2026-03-29 02:48:42.692646 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-29 02:48:42.692655 | orchestrator | Sunday 29 March 2026 02:48:27 +0000 (0:00:00.267) 0:03:05.926 **********
2026-03-29 02:48:42.692663 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:48:42.692672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:48:42.692680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:48:42.692700 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692708 | orchestrator |
2026-03-29 02:48:42.692717 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-29 02:48:42.692726 | orchestrator | Sunday 29 March 2026 02:48:28 +0000 (0:00:00.491) 0:03:06.418 **********
2026-03-29 02:48:42.692734 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692743 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:42.692751 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:42.692760 | orchestrator |
2026-03-29 02:48:42.692770 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-29 02:48:42.692779 | orchestrator | Sunday 29 March 2026 02:48:28 +0000 (0:00:00.380) 0:03:06.798 **********
2026-03-29 02:48:42.692787 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692796 | orchestrator |
2026-03-29 02:48:42.692805 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-29 02:48:42.692814 | orchestrator | Sunday 29 March 2026 02:48:28 +0000 (0:00:00.271) 0:03:07.070 **********
2026-03-29 02:48:42.692823 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.692830 | orchestrator |
2026-03-29 02:48:42.692839 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-29 02:48:42.692848 | orchestrator | Sunday 29 March 2026 02:48:29 +0000 (0:00:00.907) 0:03:07.978 **********
2026-03-29 02:48:42.692857 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:42.692866 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:42.692875 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:42.692885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:48:42.692894 | orchestrator |
2026-03-29 02:48:42.692904 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-29 02:48:42.692914 | orchestrator | Sunday 29 March 2026 02:48:30 +0000 (0:00:00.898) 0:03:08.876 **********
2026-03-29 02:48:42.692923 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:42.692934 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:42.692945 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:42.692955 | orchestrator |
2026-03-29 02:48:42.693017 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-29 02:48:42.693028 | orchestrator | Sunday 29 March 2026 02:48:31 +0000 (0:00:00.648) 0:03:09.524 **********
2026-03-29 02:48:42.693038 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:48:42.693047 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:48:42.693056 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:48:42.693066 | orchestrator |
2026-03-29 02:48:42.693074 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-29 02:48:42.693083 | orchestrator | Sunday 29 March 2026 02:48:32 +0000 (0:00:01.361) 0:03:10.885 **********
2026-03-29 02:48:42.693092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:48:42.693100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:48:42.693109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:48:42.693118 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.693127 | orchestrator |
2026-03-29 02:48:42.693136 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-29 02:48:42.693144 | orchestrator | Sunday 29 March 2026 02:48:33 +0000 (0:00:00.678) 0:03:11.563 **********
2026-03-29 02:48:42.693153 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:42.693162 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:42.693171 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:42.693180 | orchestrator |
2026-03-29 02:48:42.693188 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-29 02:48:42.693197 | orchestrator | Sunday 29 March 2026 02:48:33 +0000 (0:00:00.365) 0:03:11.929 **********
2026-03-29 02:48:42.693205 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:42.693214 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:42.693219 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:42.693235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:48:42.693240 | orchestrator |
2026-03-29 02:48:42.693246 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-29 02:48:42.693251 | orchestrator | Sunday 29 March 2026 02:48:34 +0000 (0:00:01.252) 0:03:13.182 **********
2026-03-29 02:48:42.693257 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:42.693262 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:42.693271 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:42.693280 | orchestrator |
2026-03-29 02:48:42.693289 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-29 02:48:42.693298 | orchestrator | Sunday 29 March 2026 02:48:35 +0000 (0:00:00.377) 0:03:13.560 **********
2026-03-29 02:48:42.693307 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:48:42.693315 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:48:42.693324 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:48:42.693333 | orchestrator |
2026-03-29 02:48:42.693343 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-29 02:48:42.693352 | orchestrator | Sunday 29 March 2026 02:48:36 +0000 (0:00:01.211) 0:03:14.772 **********
2026-03-29 02:48:42.693361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:48:42.693370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:48:42.693388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:48:42.693397 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.693405 | orchestrator |
2026-03-29 02:48:42.693414 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-29 02:48:42.693423 | orchestrator | Sunday 29 March 2026 02:48:37 +0000 (0:00:01.301) 0:03:16.073 **********
2026-03-29 02:48:42.693432 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:48:42.693442 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:48:42.693451 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:48:42.693460 | orchestrator |
2026-03-29 02:48:42.693469 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-29 02:48:42.693478 | orchestrator | Sunday 29 March 2026 02:48:38 +0000 (0:00:00.406) 0:03:16.479 **********
2026-03-29 02:48:42.693487 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.693498 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:42.693507 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:42.693516 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:48:42.693525 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:48:42.693534 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:48:42.693543 | orchestrator |
2026-03-29 02:48:42.693553 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-29 02:48:42.693562 | orchestrator | Sunday 29 March 2026 02:48:38 +0000 (0:00:00.726) 0:03:17.206 **********
2026-03-29 02:48:42.693572 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:48:42.693581 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:48:42.693590 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:48:42.693599 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:48:42.693608 | orchestrator |
2026-03-29 02:48:42.693617 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-29 02:48:42.693625 | orchestrator | Sunday 29 March 2026 02:48:40 +0000 (0:00:01.250) 0:03:18.457 **********
2026-03-29 02:48:42.693634 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:48:42.693644 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:48:42.693654 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:48:42.693663 | orchestrator |
2026-03-29 02:48:42.693674 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-29 02:48:42.693682 | orchestrator | Sunday 29 March 2026 02:48:40 +0000 (0:00:00.380) 0:03:18.837 **********
2026-03-29 02:48:42.693690 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:48:42.693709 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:48:42.693718 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:48:42.693728 | orchestrator |
2026-03-29 02:48:42.693737 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-29 02:48:42.693747 | orchestrator | Sunday 29 March 2026 02:48:41 +0000 (0:00:01.564) 0:03:20.402 **********
2026-03-29 02:48:42.693757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 02:48:42.693766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 02:48:42.693784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 02:49:00.396572 | orchestrator | skipping: [testbed-node-0]
2026-03-29 
02:49:00.396679 | orchestrator | 2026-03-29 02:49:00.396694 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-29 02:49:00.396703 | orchestrator | Sunday 29 March 2026 02:48:42 +0000 (0:00:00.681) 0:03:21.084 ********** 2026-03-29 02:49:00.396711 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.396719 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.396726 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.396734 | orchestrator | 2026-03-29 02:49:00.396741 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-29 02:49:00.396748 | orchestrator | 2026-03-29 02:49:00.396755 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 02:49:00.396762 | orchestrator | Sunday 29 March 2026 02:48:43 +0000 (0:00:00.665) 0:03:21.750 ********** 2026-03-29 02:49:00.396771 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:49:00.396779 | orchestrator | 2026-03-29 02:49:00.396786 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 02:49:00.396793 | orchestrator | Sunday 29 March 2026 02:48:44 +0000 (0:00:00.861) 0:03:22.611 ********** 2026-03-29 02:49:00.396800 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:49:00.396807 | orchestrator | 2026-03-29 02:49:00.396814 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 02:49:00.396821 | orchestrator | Sunday 29 March 2026 02:48:44 +0000 (0:00:00.615) 0:03:23.227 ********** 2026-03-29 02:49:00.396827 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.396834 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.396842 | 
orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.396849 | orchestrator | 2026-03-29 02:49:00.396855 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 02:49:00.396863 | orchestrator | Sunday 29 March 2026 02:48:45 +0000 (0:00:00.719) 0:03:23.946 ********** 2026-03-29 02:49:00.396869 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.396876 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.396883 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.396890 | orchestrator | 2026-03-29 02:49:00.396897 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 02:49:00.396904 | orchestrator | Sunday 29 March 2026 02:48:46 +0000 (0:00:00.603) 0:03:24.550 ********** 2026-03-29 02:49:00.396912 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.396919 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.396926 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.396932 | orchestrator | 2026-03-29 02:49:00.396939 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 02:49:00.396946 | orchestrator | Sunday 29 March 2026 02:48:46 +0000 (0:00:00.339) 0:03:24.889 ********** 2026-03-29 02:49:00.396953 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.396960 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397010 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397019 | orchestrator | 2026-03-29 02:49:00.397042 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 02:49:00.397050 | orchestrator | Sunday 29 March 2026 02:48:46 +0000 (0:00:00.322) 0:03:25.212 ********** 2026-03-29 02:49:00.397075 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397082 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397089 | orchestrator | ok: 
[testbed-node-2] 2026-03-29 02:49:00.397096 | orchestrator | 2026-03-29 02:49:00.397103 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 02:49:00.397111 | orchestrator | Sunday 29 March 2026 02:48:47 +0000 (0:00:00.778) 0:03:25.990 ********** 2026-03-29 02:49:00.397118 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397126 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397132 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397139 | orchestrator | 2026-03-29 02:49:00.397147 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 02:49:00.397155 | orchestrator | Sunday 29 March 2026 02:48:48 +0000 (0:00:00.631) 0:03:26.622 ********** 2026-03-29 02:49:00.397162 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397169 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397177 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397184 | orchestrator | 2026-03-29 02:49:00.397192 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 02:49:00.397200 | orchestrator | Sunday 29 March 2026 02:48:48 +0000 (0:00:00.355) 0:03:26.978 ********** 2026-03-29 02:49:00.397207 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397215 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397240 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.397248 | orchestrator | 2026-03-29 02:49:00.397254 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 02:49:00.397261 | orchestrator | Sunday 29 March 2026 02:48:49 +0000 (0:00:00.734) 0:03:27.712 ********** 2026-03-29 02:49:00.397268 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397275 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397282 | orchestrator | ok: [testbed-node-2] 2026-03-29 
02:49:00.397288 | orchestrator | 2026-03-29 02:49:00.397294 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 02:49:00.397301 | orchestrator | Sunday 29 March 2026 02:48:50 +0000 (0:00:00.716) 0:03:28.429 ********** 2026-03-29 02:49:00.397308 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397315 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397321 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397327 | orchestrator | 2026-03-29 02:49:00.397334 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 02:49:00.397341 | orchestrator | Sunday 29 March 2026 02:48:50 +0000 (0:00:00.696) 0:03:29.126 ********** 2026-03-29 02:49:00.397349 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397356 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397363 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.397371 | orchestrator | 2026-03-29 02:49:00.397378 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 02:49:00.397385 | orchestrator | Sunday 29 March 2026 02:48:51 +0000 (0:00:00.336) 0:03:29.462 ********** 2026-03-29 02:49:00.397412 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397420 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397426 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397433 | orchestrator | 2026-03-29 02:49:00.397441 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 02:49:00.397448 | orchestrator | Sunday 29 March 2026 02:48:51 +0000 (0:00:00.336) 0:03:29.799 ********** 2026-03-29 02:49:00.397456 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397463 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397470 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397478 | 
orchestrator | 2026-03-29 02:49:00.397485 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 02:49:00.397492 | orchestrator | Sunday 29 March 2026 02:48:52 +0000 (0:00:00.648) 0:03:30.447 ********** 2026-03-29 02:49:00.397500 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397516 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397523 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397531 | orchestrator | 2026-03-29 02:49:00.397538 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 02:49:00.397545 | orchestrator | Sunday 29 March 2026 02:48:52 +0000 (0:00:00.346) 0:03:30.793 ********** 2026-03-29 02:49:00.397553 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397560 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397567 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397574 | orchestrator | 2026-03-29 02:49:00.397581 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 02:49:00.397589 | orchestrator | Sunday 29 March 2026 02:48:52 +0000 (0:00:00.397) 0:03:31.191 ********** 2026-03-29 02:49:00.397596 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397604 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:49:00.397610 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:49:00.397617 | orchestrator | 2026-03-29 02:49:00.397623 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 02:49:00.397631 | orchestrator | Sunday 29 March 2026 02:48:53 +0000 (0:00:00.382) 0:03:31.574 ********** 2026-03-29 02:49:00.397637 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397644 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397651 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.397658 | orchestrator | 
2026-03-29 02:49:00.397666 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 02:49:00.397673 | orchestrator | Sunday 29 March 2026 02:48:53 +0000 (0:00:00.661) 0:03:32.235 ********** 2026-03-29 02:49:00.397681 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397688 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397696 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.397703 | orchestrator | 2026-03-29 02:49:00.397710 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 02:49:00.397717 | orchestrator | Sunday 29 March 2026 02:48:54 +0000 (0:00:00.384) 0:03:32.620 ********** 2026-03-29 02:49:00.397725 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397732 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397739 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.397746 | orchestrator | 2026-03-29 02:49:00.397760 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-29 02:49:00.397768 | orchestrator | Sunday 29 March 2026 02:48:54 +0000 (0:00:00.581) 0:03:33.201 ********** 2026-03-29 02:49:00.397775 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397782 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397789 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.397796 | orchestrator | 2026-03-29 02:49:00.397804 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-29 02:49:00.397811 | orchestrator | Sunday 29 March 2026 02:48:55 +0000 (0:00:00.671) 0:03:33.874 ********** 2026-03-29 02:49:00.397819 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:49:00.397827 | orchestrator | 2026-03-29 02:49:00.397834 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-03-29 02:49:00.397842 | orchestrator | Sunday 29 March 2026 02:48:56 +0000 (0:00:00.652) 0:03:34.526 ********** 2026-03-29 02:49:00.397849 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:49:00.397856 | orchestrator | 2026-03-29 02:49:00.397864 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-29 02:49:00.397871 | orchestrator | Sunday 29 March 2026 02:48:56 +0000 (0:00:00.153) 0:03:34.679 ********** 2026-03-29 02:49:00.397878 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 02:49:00.397886 | orchestrator | 2026-03-29 02:49:00.397893 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-29 02:49:00.397901 | orchestrator | Sunday 29 March 2026 02:48:57 +0000 (0:00:01.071) 0:03:35.751 ********** 2026-03-29 02:49:00.397908 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397920 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397927 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.397935 | orchestrator | 2026-03-29 02:49:00.397942 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-29 02:49:00.397948 | orchestrator | Sunday 29 March 2026 02:48:57 +0000 (0:00:00.365) 0:03:36.117 ********** 2026-03-29 02:49:00.397956 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:49:00.397964 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:49:00.397990 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:49:00.397997 | orchestrator | 2026-03-29 02:49:00.398004 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-29 02:49:00.398010 | orchestrator | Sunday 29 March 2026 02:48:58 +0000 (0:00:00.707) 0:03:36.825 ********** 2026-03-29 02:49:00.398062 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:49:00.398070 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:49:00.398077 | 
orchestrator | changed: [testbed-node-2] 2026-03-29 02:49:00.398085 | orchestrator | 2026-03-29 02:49:00.398092 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-29 02:49:00.398100 | orchestrator | Sunday 29 March 2026 02:48:59 +0000 (0:00:01.166) 0:03:37.992 ********** 2026-03-29 02:49:00.398108 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:49:00.398115 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:49:00.398123 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:49:00.398130 | orchestrator | 2026-03-29 02:49:00.398144 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-29 02:50:12.095642 | orchestrator | Sunday 29 March 2026 02:49:00 +0000 (0:00:00.802) 0:03:38.794 ********** 2026-03-29 02:50:12.095729 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.095740 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:50:12.095747 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:50:12.095753 | orchestrator | 2026-03-29 02:50:12.095761 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-29 02:50:12.095768 | orchestrator | Sunday 29 March 2026 02:49:01 +0000 (0:00:01.100) 0:03:39.895 ********** 2026-03-29 02:50:12.095775 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:12.095782 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:12.095788 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:12.095795 | orchestrator | 2026-03-29 02:50:12.095802 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-29 02:50:12.095808 | orchestrator | Sunday 29 March 2026 02:49:02 +0000 (0:00:00.727) 0:03:40.623 ********** 2026-03-29 02:50:12.095814 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.095821 | orchestrator | 2026-03-29 02:50:12.095829 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-03-29 02:50:12.095840 | orchestrator | Sunday 29 March 2026 02:49:03 +0000 (0:00:01.412) 0:03:42.035 ********** 2026-03-29 02:50:12.095850 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:12.095860 | orchestrator | 2026-03-29 02:50:12.095876 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-29 02:50:12.095888 | orchestrator | Sunday 29 March 2026 02:49:04 +0000 (0:00:00.844) 0:03:42.879 ********** 2026-03-29 02:50:12.095898 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 02:50:12.095909 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:50:12.095920 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:50:12.095930 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 02:50:12.095942 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-29 02:50:12.095953 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 02:50:12.095965 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 02:50:12.095976 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-29 02:50:12.095987 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 02:50:12.096052 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-29 02:50:12.096061 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-29 02:50:12.096067 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-29 02:50:12.096073 | orchestrator | 2026-03-29 02:50:12.096080 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-29 02:50:12.096086 | orchestrator | Sunday 29 March 2026 02:49:07 +0000 (0:00:03.391) 0:03:46.271 ********** 2026-03-29 02:50:12.096092 
| orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.096098 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:50:12.096104 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:50:12.096110 | orchestrator | 2026-03-29 02:50:12.096127 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-29 02:50:12.096134 | orchestrator | Sunday 29 March 2026 02:49:09 +0000 (0:00:01.228) 0:03:47.500 ********** 2026-03-29 02:50:12.096140 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:12.096146 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:12.096153 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:12.096159 | orchestrator | 2026-03-29 02:50:12.096165 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-29 02:50:12.096171 | orchestrator | Sunday 29 March 2026 02:49:09 +0000 (0:00:00.706) 0:03:48.206 ********** 2026-03-29 02:50:12.096177 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:12.096183 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:12.096189 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:12.096207 | orchestrator | 2026-03-29 02:50:12.096222 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-29 02:50:12.096230 | orchestrator | Sunday 29 March 2026 02:49:10 +0000 (0:00:00.357) 0:03:48.563 ********** 2026-03-29 02:50:12.096237 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.096244 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:50:12.096251 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:50:12.096258 | orchestrator | 2026-03-29 02:50:12.096266 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-29 02:50:12.096273 | orchestrator | Sunday 29 March 2026 02:49:11 +0000 (0:00:01.431) 0:03:49.995 ********** 2026-03-29 02:50:12.096280 | orchestrator | changed: [testbed-node-1] 
2026-03-29 02:50:12.096287 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.096295 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:50:12.096302 | orchestrator | 2026-03-29 02:50:12.096311 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-29 02:50:12.096323 | orchestrator | Sunday 29 March 2026 02:49:12 +0000 (0:00:01.221) 0:03:51.217 ********** 2026-03-29 02:50:12.096332 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:12.096339 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:12.096345 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:12.096352 | orchestrator | 2026-03-29 02:50:12.096358 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-29 02:50:12.096364 | orchestrator | Sunday 29 March 2026 02:49:13 +0000 (0:00:00.706) 0:03:51.923 ********** 2026-03-29 02:50:12.096371 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:50:12.096377 | orchestrator | 2026-03-29 02:50:12.096384 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-29 02:50:12.096391 | orchestrator | Sunday 29 March 2026 02:49:14 +0000 (0:00:00.658) 0:03:52.582 ********** 2026-03-29 02:50:12.096397 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:12.096403 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:12.096409 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:12.096415 | orchestrator | 2026-03-29 02:50:12.096422 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-29 02:50:12.096446 | orchestrator | Sunday 29 March 2026 02:49:14 +0000 (0:00:00.679) 0:03:53.261 ********** 2026-03-29 02:50:12.096454 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:12.096473 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 02:50:12.096481 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:12.096499 | orchestrator | 2026-03-29 02:50:12.096506 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-29 02:50:12.096514 | orchestrator | Sunday 29 March 2026 02:49:15 +0000 (0:00:00.353) 0:03:53.615 ********** 2026-03-29 02:50:12.096521 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:50:12.096529 | orchestrator | 2026-03-29 02:50:12.096537 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-29 02:50:12.096550 | orchestrator | Sunday 29 March 2026 02:49:15 +0000 (0:00:00.593) 0:03:54.209 ********** 2026-03-29 02:50:12.096568 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.096581 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:50:12.096592 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:50:12.096604 | orchestrator | 2026-03-29 02:50:12.096616 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-29 02:50:12.096627 | orchestrator | Sunday 29 March 2026 02:49:18 +0000 (0:00:02.217) 0:03:56.427 ********** 2026-03-29 02:50:12.096639 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:50:12.096661 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.096673 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:50:12.096684 | orchestrator | 2026-03-29 02:50:12.096696 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-29 02:50:12.096707 | orchestrator | Sunday 29 March 2026 02:49:19 +0000 (0:00:01.247) 0:03:57.674 ********** 2026-03-29 02:50:12.096719 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:50:12.096732 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.096744 | orchestrator | changed: 
[testbed-node-2] 2026-03-29 02:50:12.096756 | orchestrator | 2026-03-29 02:50:12.096768 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-29 02:50:12.096778 | orchestrator | Sunday 29 March 2026 02:49:21 +0000 (0:00:01.905) 0:03:59.580 ********** 2026-03-29 02:50:12.096786 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:50:12.096793 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:50:12.096800 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:50:12.096807 | orchestrator | 2026-03-29 02:50:12.096814 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-29 02:50:12.096821 | orchestrator | Sunday 29 March 2026 02:49:23 +0000 (0:00:02.101) 0:04:01.682 ********** 2026-03-29 02:50:12.096828 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:50:12.096836 | orchestrator | 2026-03-29 02:50:12.096843 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-29 02:50:12.096850 | orchestrator | Sunday 29 March 2026 02:49:24 +0000 (0:00:00.912) 0:04:02.594 ********** 2026-03-29 02:50:12.096864 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-29 02:50:12.096871 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:12.096879 | orchestrator | 2026-03-29 02:50:12.096886 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-29 02:50:12.096893 | orchestrator | Sunday 29 March 2026 02:49:46 +0000 (0:00:22.154) 0:04:24.749 ********** 2026-03-29 02:50:12.096900 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:12.096907 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:12.096914 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:12.096921 | orchestrator | 2026-03-29 02:50:12.096929 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-29 02:50:12.096936 | orchestrator | Sunday 29 March 2026 02:49:55 +0000 (0:00:09.412) 0:04:34.163 ********** 2026-03-29 02:50:12.096943 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:12.096950 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:12.096957 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:12.096964 | orchestrator | 2026-03-29 02:50:12.096978 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-29 02:50:12.096986 | orchestrator | Sunday 29 March 2026 02:49:56 +0000 (0:00:00.338) 0:04:34.502 ********** 2026-03-29 02:50:12.096995 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6a84901dd8a6329b50b22bdb9247fb9ae7050447'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-29 02:50:12.097059 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6a84901dd8a6329b50b22bdb9247fb9ae7050447'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-29 02:50:12.097068 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6a84901dd8a6329b50b22bdb9247fb9ae7050447'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-29 02:50:12.097086 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6a84901dd8a6329b50b22bdb9247fb9ae7050447'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-29 02:50:27.151437 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6a84901dd8a6329b50b22bdb9247fb9ae7050447'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-29 02:50:27.151545 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6a84901dd8a6329b50b22bdb9247fb9ae7050447'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__6a84901dd8a6329b50b22bdb9247fb9ae7050447'}])  2026-03-29 02:50:27.151560 | orchestrator | 2026-03-29 02:50:27.151571 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 02:50:27.151582 | orchestrator | Sunday 29 March 2026 02:50:12 +0000 (0:00:15.977) 0:04:50.479 ********** 2026-03-29 02:50:27.151591 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.151601 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.151610 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.151619 | orchestrator | 2026-03-29 02:50:27.151628 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-29 02:50:27.151637 | orchestrator | Sunday 29 March 2026 02:50:12 +0000 (0:00:00.334) 0:04:50.814 ********** 2026-03-29 02:50:27.151646 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:50:27.151655 | orchestrator | 2026-03-29 02:50:27.151664 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-29 02:50:27.151673 | orchestrator | Sunday 29 March 2026 02:50:13 +0000 (0:00:00.786) 0:04:51.600 ********** 2026-03-29 02:50:27.151682 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.151691 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.151701 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:27.151709 | orchestrator | 2026-03-29 02:50:27.151718 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-29 02:50:27.151748 | orchestrator | Sunday 29 March 2026 02:50:13 +0000 (0:00:00.344) 0:04:51.944 ********** 2026-03-29 02:50:27.151770 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.151779 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.151788 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.151796 | orchestrator | 2026-03-29 02:50:27.151805 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-29 
02:50:27.151814 | orchestrator | Sunday 29 March 2026 02:50:13 +0000 (0:00:00.365) 0:04:52.310 ********** 2026-03-29 02:50:27.151823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 02:50:27.151832 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 02:50:27.151841 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 02:50:27.151849 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.151858 | orchestrator | 2026-03-29 02:50:27.151867 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-29 02:50:27.151875 | orchestrator | Sunday 29 March 2026 02:50:14 +0000 (0:00:01.007) 0:04:53.318 ********** 2026-03-29 02:50:27.151884 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.151893 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.151901 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:27.151910 | orchestrator | 2026-03-29 02:50:27.151919 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-29 02:50:27.151927 | orchestrator | 2026-03-29 02:50:27.151936 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 02:50:27.151945 | orchestrator | Sunday 29 March 2026 02:50:15 +0000 (0:00:00.894) 0:04:54.212 ********** 2026-03-29 02:50:27.151955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:50:27.151967 | orchestrator | 2026-03-29 02:50:27.151977 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 02:50:27.151987 | orchestrator | Sunday 29 March 2026 02:50:16 +0000 (0:00:00.547) 0:04:54.760 ********** 2026-03-29 02:50:27.151998 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-29 02:50:27.152061 | orchestrator | 2026-03-29 02:50:27.152072 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 02:50:27.152082 | orchestrator | Sunday 29 March 2026 02:50:17 +0000 (0:00:00.773) 0:04:55.533 ********** 2026-03-29 02:50:27.152092 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.152102 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.152112 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:27.152121 | orchestrator | 2026-03-29 02:50:27.152132 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 02:50:27.152142 | orchestrator | Sunday 29 March 2026 02:50:17 +0000 (0:00:00.762) 0:04:56.295 ********** 2026-03-29 02:50:27.152152 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.152162 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.152172 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.152182 | orchestrator | 2026-03-29 02:50:27.152192 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 02:50:27.152202 | orchestrator | Sunday 29 March 2026 02:50:18 +0000 (0:00:00.325) 0:04:56.621 ********** 2026-03-29 02:50:27.152212 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.152222 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.152232 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.152242 | orchestrator | 2026-03-29 02:50:27.152269 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 02:50:27.152280 | orchestrator | Sunday 29 March 2026 02:50:18 +0000 (0:00:00.562) 0:04:57.183 ********** 2026-03-29 02:50:27.152291 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.152301 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.152320 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 02:50:27.152330 | orchestrator | 2026-03-29 02:50:27.152340 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 02:50:27.152351 | orchestrator | Sunday 29 March 2026 02:50:19 +0000 (0:00:00.318) 0:04:57.502 ********** 2026-03-29 02:50:27.152359 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.152373 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.152388 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:27.152403 | orchestrator | 2026-03-29 02:50:27.152419 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 02:50:27.152435 | orchestrator | Sunday 29 March 2026 02:50:19 +0000 (0:00:00.760) 0:04:58.263 ********** 2026-03-29 02:50:27.152450 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.152466 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.152481 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.152497 | orchestrator | 2026-03-29 02:50:27.152507 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 02:50:27.152516 | orchestrator | Sunday 29 March 2026 02:50:20 +0000 (0:00:00.315) 0:04:58.579 ********** 2026-03-29 02:50:27.152524 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.152533 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.152542 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.152550 | orchestrator | 2026-03-29 02:50:27.152559 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 02:50:27.152567 | orchestrator | Sunday 29 March 2026 02:50:20 +0000 (0:00:00.600) 0:04:59.179 ********** 2026-03-29 02:50:27.152576 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.152585 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.152613 | orchestrator | ok: [testbed-node-2] 2026-03-29 
02:50:27.152636 | orchestrator | 2026-03-29 02:50:27.152647 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 02:50:27.152658 | orchestrator | Sunday 29 March 2026 02:50:21 +0000 (0:00:00.758) 0:04:59.937 ********** 2026-03-29 02:50:27.152669 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.152679 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.152690 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:27.152701 | orchestrator | 2026-03-29 02:50:27.152712 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 02:50:27.152723 | orchestrator | Sunday 29 March 2026 02:50:23 +0000 (0:00:01.736) 0:05:01.674 ********** 2026-03-29 02:50:27.152734 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.152745 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.152763 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.152774 | orchestrator | 2026-03-29 02:50:27.152789 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 02:50:27.152808 | orchestrator | Sunday 29 March 2026 02:50:23 +0000 (0:00:00.301) 0:05:01.976 ********** 2026-03-29 02:50:27.152826 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.152958 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.152982 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:27.152999 | orchestrator | 2026-03-29 02:50:27.153050 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 02:50:27.153069 | orchestrator | Sunday 29 March 2026 02:50:24 +0000 (0:00:00.595) 0:05:02.572 ********** 2026-03-29 02:50:27.153086 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.153104 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.153121 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.153138 | orchestrator | 
2026-03-29 02:50:27.153166 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 02:50:27.153186 | orchestrator | Sunday 29 March 2026 02:50:24 +0000 (0:00:00.320) 0:05:02.892 ********** 2026-03-29 02:50:27.153204 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.153222 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.153239 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.153256 | orchestrator | 2026-03-29 02:50:27.153291 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 02:50:27.153308 | orchestrator | Sunday 29 March 2026 02:50:24 +0000 (0:00:00.313) 0:05:03.206 ********** 2026-03-29 02:50:27.153326 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.153345 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.153364 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.153382 | orchestrator | 2026-03-29 02:50:27.153399 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 02:50:27.153417 | orchestrator | Sunday 29 March 2026 02:50:25 +0000 (0:00:00.588) 0:05:03.794 ********** 2026-03-29 02:50:27.153434 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.153453 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.153472 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.153490 | orchestrator | 2026-03-29 02:50:27.153507 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 02:50:27.153524 | orchestrator | Sunday 29 March 2026 02:50:25 +0000 (0:00:00.383) 0:05:04.177 ********** 2026-03-29 02:50:27.153542 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:50:27.153560 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:50:27.153579 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:50:27.153598 | orchestrator | 
2026-03-29 02:50:27.153638 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 02:50:27.153673 | orchestrator | Sunday 29 March 2026 02:50:26 +0000 (0:00:00.350) 0:05:04.527 ********** 2026-03-29 02:50:27.153692 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.153711 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.153730 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:27.153748 | orchestrator | 2026-03-29 02:50:27.153768 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 02:50:27.153781 | orchestrator | Sunday 29 March 2026 02:50:26 +0000 (0:00:00.365) 0:05:04.893 ********** 2026-03-29 02:50:27.153791 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:50:27.153802 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:50:27.153813 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:50:27.153824 | orchestrator | 2026-03-29 02:50:27.153835 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 02:50:27.153862 | orchestrator | Sunday 29 March 2026 02:50:27 +0000 (0:00:00.641) 0:05:05.535 ********** 2026-03-29 02:51:41.497203 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:51:41.497307 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:51:41.497319 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:51:41.497326 | orchestrator | 2026-03-29 02:51:41.497335 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-29 02:51:41.497343 | orchestrator | Sunday 29 March 2026 02:50:27 +0000 (0:00:00.590) 0:05:06.126 ********** 2026-03-29 02:51:41.497350 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 02:51:41.497355 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 02:51:41.497362 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-29 02:51:41.497368 | orchestrator | 2026-03-29 02:51:41.497374 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-29 02:51:41.497381 | orchestrator | Sunday 29 March 2026 02:50:28 +0000 (0:00:00.931) 0:05:07.057 ********** 2026-03-29 02:51:41.497387 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:51:41.497395 | orchestrator | 2026-03-29 02:51:41.497401 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-29 02:51:41.497408 | orchestrator | Sunday 29 March 2026 02:50:29 +0000 (0:00:00.733) 0:05:07.791 ********** 2026-03-29 02:51:41.497415 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:51:41.497423 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:51:41.497429 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:51:41.497435 | orchestrator | 2026-03-29 02:51:41.497442 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-29 02:51:41.497470 | orchestrator | Sunday 29 March 2026 02:50:30 +0000 (0:00:00.692) 0:05:08.483 ********** 2026-03-29 02:51:41.497477 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:51:41.497484 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:51:41.497489 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:51:41.497495 | orchestrator | 2026-03-29 02:51:41.497501 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-29 02:51:41.497508 | orchestrator | Sunday 29 March 2026 02:50:30 +0000 (0:00:00.325) 0:05:08.808 ********** 2026-03-29 02:51:41.497515 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 02:51:41.497521 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 02:51:41.497528 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-29 02:51:41.497535 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-29 02:51:41.497541 | orchestrator | 2026-03-29 02:51:41.497560 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-29 02:51:41.497567 | orchestrator | Sunday 29 March 2026 02:50:41 +0000 (0:00:11.153) 0:05:19.962 ********** 2026-03-29 02:51:41.497573 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:51:41.497577 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:51:41.497581 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:51:41.497584 | orchestrator | 2026-03-29 02:51:41.497588 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-29 02:51:41.497592 | orchestrator | Sunday 29 March 2026 02:50:41 +0000 (0:00:00.350) 0:05:20.313 ********** 2026-03-29 02:51:41.497596 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-29 02:51:41.497600 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 02:51:41.497603 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 02:51:41.497607 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-29 02:51:41.497611 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:51:41.497615 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:51:41.497619 | orchestrator | 2026-03-29 02:51:41.497622 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-29 02:51:41.497626 | orchestrator | Sunday 29 March 2026 02:50:44 +0000 (0:00:02.562) 0:05:22.876 ********** 2026-03-29 02:51:41.497630 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-29 02:51:41.497634 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 02:51:41.497637 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 
02:51:41.497641 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 02:51:41.497645 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-29 02:51:41.497649 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-29 02:51:41.497652 | orchestrator | 2026-03-29 02:51:41.497656 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-29 02:51:41.497660 | orchestrator | Sunday 29 March 2026 02:50:45 +0000 (0:00:01.345) 0:05:24.221 ********** 2026-03-29 02:51:41.497663 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:51:41.497667 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:51:41.497671 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:51:41.497675 | orchestrator | 2026-03-29 02:51:41.497678 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-29 02:51:41.497682 | orchestrator | Sunday 29 March 2026 02:50:46 +0000 (0:00:00.684) 0:05:24.906 ********** 2026-03-29 02:51:41.497686 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:51:41.497690 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:51:41.497693 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:51:41.497697 | orchestrator | 2026-03-29 02:51:41.497701 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-29 02:51:41.497705 | orchestrator | Sunday 29 March 2026 02:50:47 +0000 (0:00:00.620) 0:05:25.526 ********** 2026-03-29 02:51:41.497708 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:51:41.497717 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:51:41.497721 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:51:41.497725 | orchestrator | 2026-03-29 02:51:41.497730 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-29 02:51:41.497734 | orchestrator | Sunday 29 March 2026 02:50:47 +0000 (0:00:00.325) 0:05:25.852 
********** 2026-03-29 02:51:41.497738 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:51:41.497743 | orchestrator | 2026-03-29 02:51:41.497760 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-29 02:51:41.497765 | orchestrator | Sunday 29 March 2026 02:50:47 +0000 (0:00:00.523) 0:05:26.375 ********** 2026-03-29 02:51:41.497769 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:51:41.497773 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:51:41.497778 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:51:41.497782 | orchestrator | 2026-03-29 02:51:41.497786 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-29 02:51:41.497790 | orchestrator | Sunday 29 March 2026 02:50:48 +0000 (0:00:00.539) 0:05:26.914 ********** 2026-03-29 02:51:41.497795 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:51:41.497799 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:51:41.497803 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:51:41.497807 | orchestrator | 2026-03-29 02:51:41.497811 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-29 02:51:41.497815 | orchestrator | Sunday 29 March 2026 02:50:48 +0000 (0:00:00.343) 0:05:27.258 ********** 2026-03-29 02:51:41.497820 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:51:41.497824 | orchestrator | 2026-03-29 02:51:41.497828 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-29 02:51:41.497832 | orchestrator | Sunday 29 March 2026 02:50:49 +0000 (0:00:00.530) 0:05:27.788 ********** 2026-03-29 02:51:41.497837 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:51:41.497841 | orchestrator | changed: 
[testbed-node-1] 2026-03-29 02:51:41.497845 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:51:41.497849 | orchestrator | 2026-03-29 02:51:41.497854 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-29 02:51:41.497858 | orchestrator | Sunday 29 March 2026 02:50:50 +0000 (0:00:01.576) 0:05:29.365 ********** 2026-03-29 02:51:41.497862 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:51:41.497867 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:51:41.497871 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:51:41.497875 | orchestrator | 2026-03-29 02:51:41.497879 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-29 02:51:41.497884 | orchestrator | Sunday 29 March 2026 02:50:52 +0000 (0:00:01.221) 0:05:30.586 ********** 2026-03-29 02:51:41.497890 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:51:41.497896 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:51:41.497902 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:51:41.497908 | orchestrator | 2026-03-29 02:51:41.497914 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-29 02:51:41.497925 | orchestrator | Sunday 29 March 2026 02:50:53 +0000 (0:00:01.818) 0:05:32.404 ********** 2026-03-29 02:51:41.497932 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:51:41.497938 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:51:41.497944 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:51:41.497951 | orchestrator | 2026-03-29 02:51:41.497956 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-29 02:51:41.497961 | orchestrator | Sunday 29 March 2026 02:50:55 +0000 (0:00:01.949) 0:05:34.353 ********** 2026-03-29 02:51:41.497965 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:51:41.497970 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 02:51:41.497974 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-29 02:51:41.497983 | orchestrator | 2026-03-29 02:51:41.497987 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-29 02:51:41.497991 | orchestrator | Sunday 29 March 2026 02:50:56 +0000 (0:00:00.677) 0:05:35.031 ********** 2026-03-29 02:51:41.497995 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-29 02:51:41.497999 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-29 02:51:41.498004 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-29 02:51:41.498010 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-29 02:51:41.498086 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-03-29 02:51:41.498092 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-03-29 02:51:41.498096 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-29 02:51:41.498100 | orchestrator | 2026-03-29 02:51:41.498104 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-29 02:51:41.498108 | orchestrator | Sunday 29 March 2026 02:51:32 +0000 (0:00:36.333) 0:06:11.365 ********** 2026-03-29 02:51:41.498111 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-29 02:51:41.498115 | orchestrator | 2026-03-29 02:51:41.498119 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-29 02:51:41.498123 | orchestrator | Sunday 29 March 2026 02:51:34 +0000 (0:00:01.390) 0:06:12.755 ********** 2026-03-29 02:51:41.498126 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:51:41.498130 | orchestrator | 2026-03-29 02:51:41.498134 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-29 02:51:41.498137 | orchestrator | Sunday 29 March 2026 02:51:34 +0000 (0:00:00.341) 0:06:13.097 ********** 2026-03-29 02:51:41.498141 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:51:41.498145 | orchestrator | 2026-03-29 02:51:41.498148 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-29 02:51:41.498152 | orchestrator | Sunday 29 March 2026 02:51:34 +0000 (0:00:00.147) 0:06:13.245 ********** 2026-03-29 02:51:41.498156 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-29 02:51:41.498160 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-29 02:51:41.498169 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-29 02:52:03.220749 | orchestrator | 2026-03-29 02:52:03.220832 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-29 02:52:03.220842 | orchestrator | Sunday 29 March 2026 02:51:41 +0000 (0:00:06.651) 0:06:19.896 ********** 2026-03-29 02:52:03.220851 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-29 02:52:03.220862 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-29 02:52:03.220869 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-29 02:52:03.220876 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-29 02:52:03.220882 | orchestrator | 2026-03-29 02:52:03.220889 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 02:52:03.220896 | orchestrator | Sunday 29 March 2026 02:51:46 +0000 (0:00:05.028) 0:06:24.925 ********** 2026-03-29 02:52:03.220901 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:52:03.220910 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:52:03.220917 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:52:03.220923 | orchestrator | 2026-03-29 02:52:03.220930 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-29 02:52:03.220937 | orchestrator | Sunday 29 March 2026 02:51:47 +0000 (0:00:00.710) 0:06:25.635 ********** 2026-03-29 02:52:03.220965 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:52:03.220973 | orchestrator | 2026-03-29 02:52:03.220978 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-29 02:52:03.220982 | orchestrator | Sunday 29 March 2026 02:51:48 +0000 (0:00:00.814) 0:06:26.449 ********** 2026-03-29 02:52:03.220986 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:52:03.220990 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:52:03.220994 | orchestrator | ok: 
[testbed-node-2]
2026-03-29 02:52:03.220997 | orchestrator |
2026-03-29 02:52:03.221001 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-29 02:52:03.221005 | orchestrator | Sunday 29 March 2026 02:51:48 +0000 (0:00:00.363) 0:06:26.812 **********
2026-03-29 02:52:03.221009 | orchestrator | changed: [testbed-node-0]
2026-03-29 02:52:03.221013 | orchestrator | changed: [testbed-node-1]
2026-03-29 02:52:03.221016 | orchestrator | changed: [testbed-node-2]
2026-03-29 02:52:03.221020 | orchestrator |
2026-03-29 02:52:03.221064 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-29 02:52:03.221079 | orchestrator | Sunday 29 March 2026 02:51:49 +0000 (0:00:01.293) 0:06:28.106 **********
2026-03-29 02:52:03.221083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 02:52:03.221087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 02:52:03.221091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 02:52:03.221095 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:52:03.221099 | orchestrator |
2026-03-29 02:52:03.221103 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-29 02:52:03.221107 | orchestrator | Sunday 29 March 2026 02:51:50 +0000 (0:00:00.859) 0:06:28.966 **********
2026-03-29 02:52:03.221111 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:52:03.221114 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:52:03.221118 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:52:03.221122 | orchestrator |
2026-03-29 02:52:03.221127 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-29 02:52:03.221131 | orchestrator |
2026-03-29 02:52:03.221135 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-29 02:52:03.221138 | orchestrator | Sunday 29 March 2026 02:51:51 +0000 (0:00:00.831) 0:06:29.798 **********
2026-03-29 02:52:03.221143 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:52:03.221148 | orchestrator |
2026-03-29 02:52:03.221152 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-29 02:52:03.221156 | orchestrator | Sunday 29 March 2026 02:51:51 +0000 (0:00:00.536) 0:06:30.334 **********
2026-03-29 02:52:03.221160 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:52:03.221164 | orchestrator |
2026-03-29 02:52:03.221168 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-29 02:52:03.221172 | orchestrator | Sunday 29 March 2026 02:51:52 +0000 (0:00:00.812) 0:06:31.146 **********
2026-03-29 02:52:03.221176 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221180 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221184 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221188 | orchestrator |
2026-03-29 02:52:03.221191 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-29 02:52:03.221195 | orchestrator | Sunday 29 March 2026 02:51:53 +0000 (0:00:00.323) 0:06:31.469 **********
2026-03-29 02:52:03.221199 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221203 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221207 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221211 | orchestrator |
2026-03-29 02:52:03.221215 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-29 02:52:03.221223 | orchestrator | Sunday 29 March 2026 02:51:53 +0000 (0:00:00.706) 0:06:32.176 **********
2026-03-29 02:52:03.221227 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221234 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221240 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221250 | orchestrator |
2026-03-29 02:52:03.221256 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-29 02:52:03.221262 | orchestrator | Sunday 29 March 2026 02:51:54 +0000 (0:00:00.739) 0:06:32.916 **********
2026-03-29 02:52:03.221268 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221275 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221281 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221287 | orchestrator |
2026-03-29 02:52:03.221293 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-29 02:52:03.221300 | orchestrator | Sunday 29 March 2026 02:51:55 +0000 (0:00:01.109) 0:06:34.025 **********
2026-03-29 02:52:03.221319 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221326 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221332 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221339 | orchestrator |
2026-03-29 02:52:03.221345 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-29 02:52:03.221352 | orchestrator | Sunday 29 March 2026 02:51:55 +0000 (0:00:00.348) 0:06:34.373 **********
2026-03-29 02:52:03.221358 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221364 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221371 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221377 | orchestrator |
2026-03-29 02:52:03.221384 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-29 02:52:03.221391 | orchestrator | Sunday 29 March 2026 02:51:56 +0000 (0:00:00.316) 0:06:34.690 **********
2026-03-29 02:52:03.221398 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221405 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221411 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221416 | orchestrator |
2026-03-29 02:52:03.221420 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-29 02:52:03.221425 | orchestrator | Sunday 29 March 2026 02:51:56 +0000 (0:00:00.304) 0:06:34.995 **********
2026-03-29 02:52:03.221429 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221434 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221438 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221443 | orchestrator |
2026-03-29 02:52:03.221447 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-29 02:52:03.221452 | orchestrator | Sunday 29 March 2026 02:51:57 +0000 (0:00:01.093) 0:06:36.089 **********
2026-03-29 02:52:03.221456 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221461 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221465 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221471 | orchestrator |
2026-03-29 02:52:03.221478 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-29 02:52:03.221485 | orchestrator | Sunday 29 March 2026 02:51:58 +0000 (0:00:00.839) 0:06:36.928 **********
2026-03-29 02:52:03.221492 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221498 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221505 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221511 | orchestrator |
2026-03-29 02:52:03.221518 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-29 02:52:03.221524 | orchestrator | Sunday 29 March 2026 02:51:58 +0000 (0:00:00.327) 0:06:37.256 **********
2026-03-29 02:52:03.221530 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221542 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221549 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221554 | orchestrator |
2026-03-29 02:52:03.221561 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-29 02:52:03.221568 | orchestrator | Sunday 29 March 2026 02:51:59 +0000 (0:00:00.334) 0:06:37.590 **********
2026-03-29 02:52:03.221580 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221587 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221594 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221601 | orchestrator |
2026-03-29 02:52:03.221608 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-29 02:52:03.221614 | orchestrator | Sunday 29 March 2026 02:51:59 +0000 (0:00:00.722) 0:06:38.313 **********
2026-03-29 02:52:03.221622 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221628 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221632 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221637 | orchestrator |
2026-03-29 02:52:03.221643 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-29 02:52:03.221650 | orchestrator | Sunday 29 March 2026 02:52:00 +0000 (0:00:00.366) 0:06:38.680 **********
2026-03-29 02:52:03.221657 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221663 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221670 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221676 | orchestrator |
2026-03-29 02:52:03.221683 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-29 02:52:03.221690 | orchestrator | Sunday 29 March 2026 02:52:00 +0000 (0:00:00.325) 0:06:39.006 **********
2026-03-29 02:52:03.221697 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221703 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221710 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221716 | orchestrator |
2026-03-29 02:52:03.221723 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-29 02:52:03.221729 | orchestrator | Sunday 29 March 2026 02:52:00 +0000 (0:00:00.289) 0:06:39.295 **********
2026-03-29 02:52:03.221736 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221742 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221748 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221754 | orchestrator |
2026-03-29 02:52:03.221761 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-29 02:52:03.221767 | orchestrator | Sunday 29 March 2026 02:52:01 +0000 (0:00:00.573) 0:06:39.868 **********
2026-03-29 02:52:03.221771 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:52:03.221775 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:52:03.221779 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:52:03.221782 | orchestrator |
2026-03-29 02:52:03.221786 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-29 02:52:03.221790 | orchestrator | Sunday 29 March 2026 02:52:01 +0000 (0:00:00.324) 0:06:40.193 **********
2026-03-29 02:52:03.221794 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221798 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221802 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221806 | orchestrator |
2026-03-29 02:52:03.221809 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-29 02:52:03.221813 | orchestrator | Sunday 29 March 2026 02:52:02 +0000 (0:00:00.329) 0:06:40.522 **********
2026-03-29 02:52:03.221817 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221821 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221825 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:52:03.221829 | orchestrator |
2026-03-29 02:52:03.221833 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-29 02:52:03.221836 | orchestrator | Sunday 29 March 2026 02:52:02 +0000 (0:00:00.774) 0:06:41.297 **********
2026-03-29 02:52:03.221840 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:52:03.221844 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:52:03.221853 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:53:04.535622 | orchestrator |
2026-03-29 02:53:04.535721 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-29 02:53:04.535734 | orchestrator | Sunday 29 March 2026 02:52:03 +0000 (0:00:00.325) 0:06:41.622 **********
2026-03-29 02:53:04.535743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 02:53:04.535773 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 02:53:04.535781 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 02:53:04.535788 | orchestrator |
2026-03-29 02:53:04.535796 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-29 02:53:04.535804 | orchestrator | Sunday 29 March 2026 02:52:04 +0000 (0:00:00.901) 0:06:42.524 **********
2026-03-29 02:53:04.535812 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:53:04.535819 | orchestrator |
2026-03-29 02:53:04.535827 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-29 02:53:04.535834 | orchestrator | Sunday 29 March 2026 02:52:04 +0000 (0:00:00.774) 0:06:43.298 **********
2026-03-29 02:53:04.535842 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:04.535851 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:04.535858 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:04.535865 | orchestrator |
2026-03-29 02:53:04.535872 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-29 02:53:04.535880 | orchestrator | Sunday 29 March 2026 02:52:05 +0000 (0:00:00.325) 0:06:43.624 **********
2026-03-29 02:53:04.535887 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:04.535894 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:04.535902 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:04.535909 | orchestrator |
2026-03-29 02:53:04.535916 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-29 02:53:04.535923 | orchestrator | Sunday 29 March 2026 02:52:05 +0000 (0:00:00.353) 0:06:43.977 **********
2026-03-29 02:53:04.535931 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:53:04.535939 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:53:04.535946 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:53:04.535954 | orchestrator |
2026-03-29 02:53:04.535961 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-29 02:53:04.535981 | orchestrator | Sunday 29 March 2026 02:52:06 +0000 (0:00:00.668) 0:06:44.646 **********
2026-03-29 02:53:04.535988 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:53:04.535996 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:53:04.536003 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:53:04.536010 | orchestrator |
2026-03-29 02:53:04.536017 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-29 02:53:04.536025 | orchestrator | Sunday 29 March 2026 02:52:06 +0000 (0:00:00.614) 0:06:45.260 **********
2026-03-29 02:53:04.536100 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-29 02:53:04.536110 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-29 02:53:04.536118 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-29 02:53:04.536126 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-29 02:53:04.536138 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-29 02:53:04.536152 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-29 02:53:04.536165 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-29 02:53:04.536178 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-29 02:53:04.536191 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-29 02:53:04.536202 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-29 02:53:04.536211 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-29 02:53:04.536219 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-29 02:53:04.536236 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-29 02:53:04.536245 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-29 02:53:04.536254 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-29 02:53:04.536262 | orchestrator |
2026-03-29 02:53:04.536271 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-29 02:53:04.536278 | orchestrator | Sunday 29 March 2026 02:52:11 +0000 (0:00:04.330) 0:06:49.591 **********
2026-03-29 02:53:04.536286 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:04.536293 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:04.536300 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:04.536307 | orchestrator |
2026-03-29 02:53:04.536315 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-29 02:53:04.536322 | orchestrator | Sunday 29 March 2026 02:52:11 +0000 (0:00:00.327) 0:06:49.918 **********
2026-03-29 02:53:04.536329 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:53:04.536336 | orchestrator |
2026-03-29 02:53:04.536343 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-29 02:53:04.536350 | orchestrator | Sunday 29 March 2026 02:52:12 +0000 (0:00:00.767) 0:06:50.686 **********
2026-03-29 02:53:04.536373 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-29 02:53:04.536382 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-29 02:53:04.536389 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-29 02:53:04.536396 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-29 02:53:04.536404 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-29 02:53:04.536411 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-29 02:53:04.536418 | orchestrator |
2026-03-29 02:53:04.536426 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-29 02:53:04.536433 | orchestrator | Sunday 29 March 2026 02:52:13 +0000 (0:00:01.085) 0:06:51.771 **********
2026-03-29 02:53:04.536440 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-29 02:53:04.536447 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-29 02:53:04.536455 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-29 02:53:04.536462 | orchestrator |
2026-03-29 02:53:04.536469 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-29 02:53:04.536476 | orchestrator | Sunday 29 March 2026 02:52:15 +0000 (0:00:02.270) 0:06:54.041 **********
2026-03-29 02:53:04.536484 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-29 02:53:04.536491 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-29 02:53:04.536498 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:53:04.536506 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-29 02:53:04.536513 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-29 02:53:04.536520 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:53:04.536527 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-29 02:53:04.536535 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-29 02:53:04.536542 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:53:04.536549 | orchestrator |
2026-03-29 02:53:04.536556 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-29 02:53:04.536563 | orchestrator | Sunday 29 March 2026 02:52:16 +0000 (0:00:01.339) 0:06:55.381 **********
2026-03-29 02:53:04.536571 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 02:53:04.536581 | orchestrator |
2026-03-29 02:53:04.536598 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-29 02:53:04.536617 | orchestrator | Sunday 29 March 2026 02:52:19 +0000 (0:00:02.179) 0:06:57.560 **********
2026-03-29 02:53:04.536640 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:53:04.536652 | orchestrator |
2026-03-29 02:53:04.536663 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-29 02:53:04.536675 | orchestrator | Sunday 29 March 2026 02:52:20 +0000 (0:00:00.856) 0:06:58.417 **********
2026-03-29 02:53:04.536699 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})
2026-03-29 02:53:04.536713 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})
2026-03-29 02:53:04.536724 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})
2026-03-29 02:53:04.536736 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})
2026-03-29 02:53:04.536746 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})
2026-03-29 02:53:04.536756 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})
2026-03-29 02:53:04.536767 | orchestrator |
2026-03-29 02:53:04.536778 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-29 02:53:04.536789 | orchestrator | Sunday 29 March 2026 02:52:59 +0000 (0:00:39.994) 0:07:38.411 **********
2026-03-29 02:53:04.536800 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:04.536811 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:04.536822 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:04.536833 | orchestrator |
2026-03-29 02:53:04.536844 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-29 02:53:04.536856 | orchestrator | Sunday 29 March 2026 02:53:00 +0000 (0:00:00.333) 0:07:38.745 **********
2026-03-29 02:53:04.536868 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:53:04.536879 | orchestrator |
2026-03-29 02:53:04.536891 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-29 02:53:04.536903 | orchestrator | Sunday 29 March 2026 02:53:01 +0000 (0:00:00.794) 0:07:39.540 **********
2026-03-29 02:53:04.536915 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:53:04.536926 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:53:04.536937 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:53:04.536947 | orchestrator |
2026-03-29 02:53:04.536959 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-29 02:53:04.536971 | orchestrator | Sunday 29 March 2026 02:53:01 +0000 (0:00:00.710) 0:07:40.251 **********
2026-03-29 02:53:04.536982 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:53:04.536995 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:53:04.537006 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:53:04.537017 | orchestrator |
2026-03-29 02:53:04.537062 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-29 02:53:39.702333 | orchestrator | Sunday 29 March 2026 02:53:04 +0000 (0:00:02.681) 0:07:42.932 **********
2026-03-29 02:53:39.702447 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:53:39.702464 | orchestrator |
2026-03-29 02:53:39.702477 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-29 02:53:39.702489 | orchestrator | Sunday 29 March 2026 02:53:05 +0000 (0:00:00.787) 0:07:43.719 **********
2026-03-29 02:53:39.702500 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:53:39.702512 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:53:39.702548 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:53:39.702560 | orchestrator |
2026-03-29 02:53:39.702571 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-29 02:53:39.702582 | orchestrator | Sunday 29 March 2026 02:53:06 +0000 (0:00:01.237) 0:07:44.957 **********
2026-03-29 02:53:39.702593 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:53:39.702604 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:53:39.702615 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:53:39.702625 | orchestrator |
2026-03-29 02:53:39.702636 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-29 02:53:39.702647 | orchestrator | Sunday 29 March 2026 02:53:07 +0000 (0:00:01.145) 0:07:46.103 **********
2026-03-29 02:53:39.702658 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:53:39.702669 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:53:39.702679 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:53:39.702690 | orchestrator |
2026-03-29 02:53:39.702701 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-29 02:53:39.702711 | orchestrator | Sunday 29 March 2026 02:53:09 +0000 (0:00:01.908) 0:07:48.011 **********
2026-03-29 02:53:39.702722 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.702733 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.702744 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:39.702754 | orchestrator |
2026-03-29 02:53:39.702765 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-29 02:53:39.702776 | orchestrator | Sunday 29 March 2026 02:53:09 +0000 (0:00:00.292) 0:07:48.304 **********
2026-03-29 02:53:39.702787 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.702798 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.702808 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:39.702819 | orchestrator |
2026-03-29 02:53:39.702844 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-29 02:53:39.702855 | orchestrator | Sunday 29 March 2026 02:53:10 +0000 (0:00:00.288) 0:07:48.593 **********
2026-03-29 02:53:39.702866 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-03-29 02:53:39.702877 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-03-29 02:53:39.702888 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-29 02:53:39.702898 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-29 02:53:39.702909 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-29 02:53:39.702920 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-03-29 02:53:39.702930 | orchestrator |
2026-03-29 02:53:39.702941 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-29 02:53:39.702952 | orchestrator | Sunday 29 March 2026 02:53:11 +0000 (0:00:00.972) 0:07:49.565 **********
2026-03-29 02:53:39.702963 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-29 02:53:39.702974 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-29 02:53:39.702985 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-29 02:53:39.702996 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-29 02:53:39.703006 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-29 02:53:39.703017 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-29 02:53:39.703027 | orchestrator |
2026-03-29 02:53:39.703061 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-29 02:53:39.703073 | orchestrator | Sunday 29 March 2026 02:53:13 +0000 (0:00:02.385) 0:07:51.950 **********
2026-03-29 02:53:39.703084 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-29 02:53:39.703095 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-29 02:53:39.703105 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-29 02:53:39.703116 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-29 02:53:39.703127 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-29 02:53:39.703137 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-29 02:53:39.703148 | orchestrator |
2026-03-29 02:53:39.703159 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-29 02:53:39.703178 | orchestrator | Sunday 29 March 2026 02:53:16 +0000 (0:00:03.446) 0:07:55.397 **********
2026-03-29 02:53:39.703189 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703200 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.703211 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-29 02:53:39.703221 | orchestrator |
2026-03-29 02:53:39.703232 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-29 02:53:39.703243 | orchestrator | Sunday 29 March 2026 02:53:19 +0000 (0:00:02.620) 0:07:58.018 **********
2026-03-29 02:53:39.703254 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703265 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.703276 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-29 02:53:39.703287 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-29 02:53:39.703298 | orchestrator |
2026-03-29 02:53:39.703309 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-29 02:53:39.703320 | orchestrator | Sunday 29 March 2026 02:53:32 +0000 (0:00:12.589) 0:08:10.607 **********
2026-03-29 02:53:39.703330 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703341 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.703353 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:39.703364 | orchestrator |
2026-03-29 02:53:39.703375 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-29 02:53:39.703386 | orchestrator | Sunday 29 March 2026 02:53:33 +0000 (0:00:01.184) 0:08:11.792 **********
2026-03-29 02:53:39.703442 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703456 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.703467 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:39.703478 | orchestrator |
2026-03-29 02:53:39.703489 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-29 02:53:39.703499 | orchestrator | Sunday 29 March 2026 02:53:34 +0000 (0:00:00.628) 0:08:12.421 **********
2026-03-29 02:53:39.703510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:53:39.703521 | orchestrator |
2026-03-29 02:53:39.703532 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-29 02:53:39.703543 | orchestrator | Sunday 29 March 2026 02:53:34 +0000 (0:00:00.565) 0:08:12.986 **********
2026-03-29 02:53:39.703553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:53:39.703564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:53:39.703575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:53:39.703586 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703597 | orchestrator |
2026-03-29 02:53:39.703608 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-29 02:53:39.703618 | orchestrator | Sunday 29 March 2026 02:53:35 +0000 (0:00:00.451) 0:08:13.438 **********
2026-03-29 02:53:39.703629 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703640 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.703651 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:39.703662 | orchestrator |
2026-03-29 02:53:39.703673 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-29 02:53:39.703683 | orchestrator | Sunday 29 March 2026 02:53:35 +0000 (0:00:00.327) 0:08:13.766 **********
2026-03-29 02:53:39.703694 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703705 | orchestrator |
2026-03-29 02:53:39.703716 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-29 02:53:39.703727 | orchestrator | Sunday 29 March 2026 02:53:35 +0000 (0:00:00.216) 0:08:13.982 **********
2026-03-29 02:53:39.703738 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703749 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.703759 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:39.703779 | orchestrator |
2026-03-29 02:53:39.703796 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-29 02:53:39.703807 | orchestrator | Sunday 29 March 2026 02:53:36 +0000 (0:00:00.476) 0:08:14.459 **********
2026-03-29 02:53:39.703818 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703829 | orchestrator |
2026-03-29 02:53:39.703840 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-29 02:53:39.703850 | orchestrator | Sunday 29 March 2026 02:53:36 +0000 (0:00:00.199) 0:08:14.658 **********
2026-03-29 02:53:39.703861 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703872 | orchestrator |
2026-03-29 02:53:39.703883 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-29 02:53:39.703894 | orchestrator | Sunday 29 March 2026 02:53:36 +0000 (0:00:00.215) 0:08:14.873 **********
2026-03-29 02:53:39.703904 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703915 | orchestrator |
2026-03-29 02:53:39.703926 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-29 02:53:39.703937 | orchestrator | Sunday 29 March 2026 02:53:36 +0000 (0:00:00.114) 0:08:14.988 **********
2026-03-29 02:53:39.703948 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.703959 | orchestrator |
2026-03-29 02:53:39.703969 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-29 02:53:39.703980 | orchestrator | Sunday 29 March 2026 02:53:36 +0000 (0:00:00.203) 0:08:15.191 **********
2026-03-29 02:53:39.703991 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.704002 | orchestrator |
2026-03-29 02:53:39.704013 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-29 02:53:39.704024 | orchestrator | Sunday 29 March 2026 02:53:36 +0000 (0:00:00.208) 0:08:15.400 **********
2026-03-29 02:53:39.704034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:53:39.704065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:53:39.704076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:53:39.704087 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.704098 | orchestrator |
2026-03-29 02:53:39.704109 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-29 02:53:39.704120 | orchestrator | Sunday 29 March 2026 02:53:37 +0000 (0:00:00.378) 0:08:15.779 **********
2026-03-29 02:53:39.704131 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.704141 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:53:39.704152 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:53:39.704163 | orchestrator |
2026-03-29 02:53:39.704174 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-29 02:53:39.704185 | orchestrator | Sunday 29 March 2026 02:53:37 +0000 (0:00:00.434) 0:08:16.213 **********
2026-03-29 02:53:39.704195 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.704206 | orchestrator |
2026-03-29 02:53:39.704217 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-29 02:53:39.704228 | orchestrator | Sunday 29 March 2026 02:53:38 +0000 (0:00:00.220) 0:08:16.433 **********
2026-03-29 02:53:39.704239 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:53:39.704250 | orchestrator |
2026-03-29 02:53:39.704261 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-29 02:53:39.704272 | orchestrator |
2026-03-29 02:53:39.704283 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-29 02:53:39.704293 | orchestrator | Sunday 29 March 2026 02:53:38 +0000 (0:00:00.617) 0:08:17.050 **********
2026-03-29 02:53:39.704305 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:53:39.704317 | orchestrator |
2026-03-29 02:53:39.704335 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-29 02:54:03.276630 | orchestrator | Sunday 29 March 2026 02:53:39 +0000 (0:00:01.050) 0:08:18.101 **********
2026-03-29 02:54:03.276796 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 02:54:03.276853 | orchestrator |
2026-03-29 02:54:03.276868 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-29 02:54:03.276880 | orchestrator | Sunday 29 March 2026 02:53:40 +0000 (0:00:01.087) 0:08:19.188 **********
2026-03-29 02:54:03.276892 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:54:03.276905 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:54:03.276916 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:54:03.276927 | orchestrator | ok: [testbed-node-0]
2026-03-29 02:54:03.276938 | orchestrator | ok: [testbed-node-1]
2026-03-29 02:54:03.276949 | orchestrator | ok: [testbed-node-2]
2026-03-29 02:54:03.276960 | orchestrator |
2026-03-29 02:54:03.276971 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-29 02:54:03.276982 | orchestrator | Sunday 29 March 2026 02:53:41 +0000 (0:00:01.211) 0:08:20.400 **********
2026-03-29 02:54:03.276993 | orchestrator | skipping: [testbed-node-0]
2026-03-29 02:54:03.277010 | orchestrator | skipping: [testbed-node-1]
2026-03-29 02:54:03.277034 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:54:03.277138 | orchestrator | skipping: [testbed-node-2]
2026-03-29 02:54:03.277158 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:54:03.277176 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:54:03.277195 | orchestrator |
2026-03-29 02:54:03.277215 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-29 02:54:03.277235 | orchestrator | Sunday 29
March 2026 02:53:42 +0000 (0:00:00.728) 0:08:21.129 ********** 2026-03-29 02:54:03.277255 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.277274 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.277294 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.277308 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.277318 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.277329 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.277340 | orchestrator | 2026-03-29 02:54:03.277351 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 02:54:03.277361 | orchestrator | Sunday 29 March 2026 02:53:43 +0000 (0:00:00.727) 0:08:21.856 ********** 2026-03-29 02:54:03.277388 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.277400 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.277410 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.277421 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.277432 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.277442 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.277453 | orchestrator | 2026-03-29 02:54:03.277464 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 02:54:03.277474 | orchestrator | Sunday 29 March 2026 02:53:44 +0000 (0:00:00.718) 0:08:22.574 ********** 2026-03-29 02:54:03.277485 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:03.277496 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:03.277506 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:03.277517 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:03.277528 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:54:03.277538 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:03.277549 | orchestrator | 2026-03-29 02:54:03.277560 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-29 02:54:03.277571 | orchestrator | Sunday 29 March 2026 02:53:45 +0000 (0:00:01.118) 0:08:23.693 ********** 2026-03-29 02:54:03.277582 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:03.277593 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:03.277603 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:03.277614 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.277625 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.277636 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.277662 | orchestrator | 2026-03-29 02:54:03.277673 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 02:54:03.277753 | orchestrator | Sunday 29 March 2026 02:53:45 +0000 (0:00:00.601) 0:08:24.295 ********** 2026-03-29 02:54:03.277764 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:03.277775 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:03.277786 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:03.277797 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.277807 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.277818 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.277828 | orchestrator | 2026-03-29 02:54:03.277839 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 02:54:03.277850 | orchestrator | Sunday 29 March 2026 02:53:46 +0000 (0:00:00.717) 0:08:25.012 ********** 2026-03-29 02:54:03.277861 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.277872 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.277883 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.277894 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:03.277904 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:54:03.277915 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:03.277926 | 
orchestrator | 2026-03-29 02:54:03.277937 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 02:54:03.277948 | orchestrator | Sunday 29 March 2026 02:53:47 +0000 (0:00:00.989) 0:08:26.002 ********** 2026-03-29 02:54:03.277958 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.277969 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.277980 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.277990 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:03.278001 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:54:03.278012 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:03.278110 | orchestrator | 2026-03-29 02:54:03.278121 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 02:54:03.278133 | orchestrator | Sunday 29 March 2026 02:53:48 +0000 (0:00:01.203) 0:08:27.206 ********** 2026-03-29 02:54:03.278144 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:03.278155 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:03.278165 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:03.278176 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.278187 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.278198 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.278211 | orchestrator | 2026-03-29 02:54:03.278231 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 02:54:03.278276 | orchestrator | Sunday 29 March 2026 02:53:49 +0000 (0:00:00.538) 0:08:27.744 ********** 2026-03-29 02:54:03.278296 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:03.278314 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:03.278333 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:03.278353 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:03.278372 | orchestrator | ok: [testbed-node-1] 2026-03-29 
02:54:03.278392 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:03.278411 | orchestrator | 2026-03-29 02:54:03.278430 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 02:54:03.278449 | orchestrator | Sunday 29 March 2026 02:53:50 +0000 (0:00:00.708) 0:08:28.452 ********** 2026-03-29 02:54:03.278469 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.278488 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.278508 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.278528 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.278548 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.278645 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.278656 | orchestrator | 2026-03-29 02:54:03.278667 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 02:54:03.278678 | orchestrator | Sunday 29 March 2026 02:53:50 +0000 (0:00:00.549) 0:08:29.002 ********** 2026-03-29 02:54:03.278689 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.278714 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.278724 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.278735 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.278746 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.278757 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.278767 | orchestrator | 2026-03-29 02:54:03.278778 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 02:54:03.278789 | orchestrator | Sunday 29 March 2026 02:53:51 +0000 (0:00:00.681) 0:08:29.684 ********** 2026-03-29 02:54:03.278800 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.278810 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.278889 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.278901 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 02:54:03.278912 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.278923 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.278933 | orchestrator | 2026-03-29 02:54:03.278944 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 02:54:03.278955 | orchestrator | Sunday 29 March 2026 02:53:51 +0000 (0:00:00.582) 0:08:30.267 ********** 2026-03-29 02:54:03.278966 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:03.278977 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:03.278989 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:03.279007 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.279025 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.279080 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.279100 | orchestrator | 2026-03-29 02:54:03.279120 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 02:54:03.279139 | orchestrator | Sunday 29 March 2026 02:53:52 +0000 (0:00:00.603) 0:08:30.870 ********** 2026-03-29 02:54:03.279155 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:03.279166 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:03.279177 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:03.279188 | orchestrator | skipping: [testbed-node-0] 2026-03-29 02:54:03.279199 | orchestrator | skipping: [testbed-node-1] 2026-03-29 02:54:03.279209 | orchestrator | skipping: [testbed-node-2] 2026-03-29 02:54:03.279220 | orchestrator | 2026-03-29 02:54:03.279231 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 02:54:03.279242 | orchestrator | Sunday 29 March 2026 02:53:52 +0000 (0:00:00.447) 0:08:31.318 ********** 2026-03-29 02:54:03.279253 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:03.279263 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 02:54:03.279274 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:03.279285 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:03.279296 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:54:03.279306 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:03.279317 | orchestrator | 2026-03-29 02:54:03.279373 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 02:54:03.279385 | orchestrator | Sunday 29 March 2026 02:53:53 +0000 (0:00:00.735) 0:08:32.053 ********** 2026-03-29 02:54:03.279397 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.279407 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.279419 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.279430 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:03.279440 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:54:03.279451 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:03.279462 | orchestrator | 2026-03-29 02:54:03.279472 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 02:54:03.279483 | orchestrator | Sunday 29 March 2026 02:53:54 +0000 (0:00:00.573) 0:08:32.627 ********** 2026-03-29 02:54:03.279494 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:03.279505 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:03.279516 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:03.279530 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:03.279550 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:54:03.279568 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:03.279600 | orchestrator | 2026-03-29 02:54:03.279618 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-29 02:54:03.279636 | orchestrator | Sunday 29 March 2026 02:53:55 +0000 (0:00:01.114) 0:08:33.741 ********** 2026-03-29 02:54:03.279656 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-29 02:54:03.279674 | orchestrator | 2026-03-29 02:54:03.279693 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-29 02:54:03.279713 | orchestrator | Sunday 29 March 2026 02:53:59 +0000 (0:00:04.388) 0:08:38.130 ********** 2026-03-29 02:54:03.279731 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 02:54:03.279750 | orchestrator | 2026-03-29 02:54:03.279768 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-29 02:54:03.279787 | orchestrator | Sunday 29 March 2026 02:54:01 +0000 (0:00:02.060) 0:08:40.191 ********** 2026-03-29 02:54:03.279806 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:54:03.279825 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:54:03.279843 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:54:03.279858 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:03.279869 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:54:03.279895 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:54:25.740355 | orchestrator | 2026-03-29 02:54:25.740456 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-29 02:54:25.740468 | orchestrator | Sunday 29 March 2026 02:54:03 +0000 (0:00:01.484) 0:08:41.675 ********** 2026-03-29 02:54:25.740476 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:54:25.740486 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:54:25.740493 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:54:25.740501 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:54:25.740508 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:54:25.740516 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:54:25.740523 | orchestrator | 2026-03-29 02:54:25.740530 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-29 02:54:25.740538 | orchestrator | Sunday 29 March 2026 02:54:04 +0000 (0:00:01.118) 0:08:42.794 ********** 2026-03-29 02:54:25.740547 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:54:25.740555 | orchestrator | 2026-03-29 02:54:25.740563 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-29 02:54:25.740570 | orchestrator | Sunday 29 March 2026 02:54:05 +0000 (0:00:01.072) 0:08:43.867 ********** 2026-03-29 02:54:25.740577 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:54:25.740584 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:54:25.740592 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:54:25.740599 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:54:25.740606 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:54:25.740613 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:54:25.740621 | orchestrator | 2026-03-29 02:54:25.740628 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-29 02:54:25.740636 | orchestrator | Sunday 29 March 2026 02:54:06 +0000 (0:00:01.489) 0:08:45.356 ********** 2026-03-29 02:54:25.740643 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:54:25.740650 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:54:25.740657 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:54:25.740664 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:54:25.740672 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:54:25.740679 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:54:25.740686 | orchestrator | 2026-03-29 02:54:25.740707 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-29 02:54:25.740715 | orchestrator | Sunday 29 March 2026 02:54:10 +0000 (0:00:03.379) 
0:08:48.735 ********** 2026-03-29 02:54:25.740728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 02:54:25.740769 | orchestrator | 2026-03-29 02:54:25.740785 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-29 02:54:25.740796 | orchestrator | Sunday 29 March 2026 02:54:11 +0000 (0:00:01.113) 0:08:49.849 ********** 2026-03-29 02:54:25.740809 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.740823 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.740837 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.740849 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:25.740862 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:54:25.740870 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:25.740877 | orchestrator | 2026-03-29 02:54:25.740884 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-29 02:54:25.740891 | orchestrator | Sunday 29 March 2026 02:54:12 +0000 (0:00:00.565) 0:08:50.415 ********** 2026-03-29 02:54:25.740898 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:54:25.740908 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:54:25.740916 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:54:25.740924 | orchestrator | changed: [testbed-node-1] 2026-03-29 02:54:25.740932 | orchestrator | changed: [testbed-node-0] 2026-03-29 02:54:25.740940 | orchestrator | changed: [testbed-node-2] 2026-03-29 02:54:25.740949 | orchestrator | 2026-03-29 02:54:25.740957 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-29 02:54:25.740966 | orchestrator | Sunday 29 March 2026 02:54:14 +0000 (0:00:02.311) 0:08:52.726 ********** 2026-03-29 02:54:25.740974 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.740982 | 
orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.740990 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.740999 | orchestrator | ok: [testbed-node-0] 2026-03-29 02:54:25.741007 | orchestrator | ok: [testbed-node-1] 2026-03-29 02:54:25.741015 | orchestrator | ok: [testbed-node-2] 2026-03-29 02:54:25.741023 | orchestrator | 2026-03-29 02:54:25.741032 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-29 02:54:25.741040 | orchestrator | 2026-03-29 02:54:25.741112 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 02:54:25.741124 | orchestrator | Sunday 29 March 2026 02:54:15 +0000 (0:00:00.969) 0:08:53.695 ********** 2026-03-29 02:54:25.741133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:54:25.741142 | orchestrator | 2026-03-29 02:54:25.741150 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 02:54:25.741158 | orchestrator | Sunday 29 March 2026 02:54:15 +0000 (0:00:00.578) 0:08:54.274 ********** 2026-03-29 02:54:25.741167 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:54:25.741175 | orchestrator | 2026-03-29 02:54:25.741183 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 02:54:25.741192 | orchestrator | Sunday 29 March 2026 02:54:16 +0000 (0:00:00.909) 0:08:55.183 ********** 2026-03-29 02:54:25.741200 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.741208 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.741216 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.741225 | orchestrator | 2026-03-29 02:54:25.741233 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-29 02:54:25.741241 | orchestrator | Sunday 29 March 2026 02:54:17 +0000 (0:00:00.321) 0:08:55.504 ********** 2026-03-29 02:54:25.741249 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.741257 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.741280 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.741288 | orchestrator | 2026-03-29 02:54:25.741295 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 02:54:25.741302 | orchestrator | Sunday 29 March 2026 02:54:17 +0000 (0:00:00.739) 0:08:56.244 ********** 2026-03-29 02:54:25.741318 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.741325 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.741332 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.741339 | orchestrator | 2026-03-29 02:54:25.741346 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 02:54:25.741354 | orchestrator | Sunday 29 March 2026 02:54:18 +0000 (0:00:00.767) 0:08:57.012 ********** 2026-03-29 02:54:25.741361 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.741368 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.741375 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.741382 | orchestrator | 2026-03-29 02:54:25.741389 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 02:54:25.741396 | orchestrator | Sunday 29 March 2026 02:54:19 +0000 (0:00:01.132) 0:08:58.144 ********** 2026-03-29 02:54:25.741403 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.741414 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.741430 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.741447 | orchestrator | 2026-03-29 02:54:25.741459 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 
02:54:25.741471 | orchestrator | Sunday 29 March 2026 02:54:20 +0000 (0:00:00.322) 0:08:58.467 ********** 2026-03-29 02:54:25.741482 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.741493 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.741505 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.741516 | orchestrator | 2026-03-29 02:54:25.741528 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 02:54:25.741539 | orchestrator | Sunday 29 March 2026 02:54:20 +0000 (0:00:00.330) 0:08:58.797 ********** 2026-03-29 02:54:25.741549 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.741561 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.741574 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.741587 | orchestrator | 2026-03-29 02:54:25.741599 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 02:54:25.741611 | orchestrator | Sunday 29 March 2026 02:54:20 +0000 (0:00:00.312) 0:08:59.110 ********** 2026-03-29 02:54:25.741631 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.741644 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.741656 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.741669 | orchestrator | 2026-03-29 02:54:25.741678 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 02:54:25.741685 | orchestrator | Sunday 29 March 2026 02:54:21 +0000 (0:00:01.044) 0:09:00.155 ********** 2026-03-29 02:54:25.741692 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.741699 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.741708 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.741720 | orchestrator | 2026-03-29 02:54:25.741733 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 02:54:25.741751 | orchestrator | Sunday 
29 March 2026 02:54:22 +0000 (0:00:00.754) 0:09:00.909 ********** 2026-03-29 02:54:25.741765 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.741776 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.741789 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.741802 | orchestrator | 2026-03-29 02:54:25.741814 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 02:54:25.741827 | orchestrator | Sunday 29 March 2026 02:54:22 +0000 (0:00:00.333) 0:09:01.242 ********** 2026-03-29 02:54:25.741835 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.741842 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.741850 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.741857 | orchestrator | 2026-03-29 02:54:25.741864 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 02:54:25.741871 | orchestrator | Sunday 29 March 2026 02:54:23 +0000 (0:00:00.341) 0:09:01.584 ********** 2026-03-29 02:54:25.741878 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.741885 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.741900 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.741907 | orchestrator | 2026-03-29 02:54:25.741914 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 02:54:25.741922 | orchestrator | Sunday 29 March 2026 02:54:23 +0000 (0:00:00.598) 0:09:02.182 ********** 2026-03-29 02:54:25.741929 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.741936 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.741943 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.741950 | orchestrator | 2026-03-29 02:54:25.741957 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 02:54:25.741965 | orchestrator | Sunday 29 March 2026 02:54:24 +0000 
(0:00:00.398) 0:09:02.581 ********** 2026-03-29 02:54:25.741972 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:54:25.741979 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:54:25.741986 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:54:25.741993 | orchestrator | 2026-03-29 02:54:25.742000 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 02:54:25.742007 | orchestrator | Sunday 29 March 2026 02:54:24 +0000 (0:00:00.350) 0:09:02.932 ********** 2026-03-29 02:54:25.742105 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.742117 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.742124 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.742132 | orchestrator | 2026-03-29 02:54:25.742139 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 02:54:25.742147 | orchestrator | Sunday 29 March 2026 02:54:24 +0000 (0:00:00.307) 0:09:03.239 ********** 2026-03-29 02:54:25.742154 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.742161 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.742168 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.742176 | orchestrator | 2026-03-29 02:54:25.742183 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 02:54:25.742190 | orchestrator | Sunday 29 March 2026 02:54:25 +0000 (0:00:00.583) 0:09:03.822 ********** 2026-03-29 02:54:25.742197 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:54:25.742205 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:54:25.742212 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:54:25.742219 | orchestrator | 2026-03-29 02:54:25.742249 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 02:55:06.065561 | orchestrator | Sunday 29 March 2026 02:54:25 +0000 (0:00:00.317) 
0:09:04.140 ********** 2026-03-29 02:55:06.065739 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:06.065758 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:06.065770 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:06.065782 | orchestrator | 2026-03-29 02:55:06.065795 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 02:55:06.065806 | orchestrator | Sunday 29 March 2026 02:54:26 +0000 (0:00:00.376) 0:09:04.517 ********** 2026-03-29 02:55:06.065818 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:06.065829 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:06.065839 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:06.065850 | orchestrator | 2026-03-29 02:55:06.065862 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-29 02:55:06.065873 | orchestrator | Sunday 29 March 2026 02:54:26 +0000 (0:00:00.799) 0:09:05.316 ********** 2026-03-29 02:55:06.065884 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:06.065896 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:06.065908 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-29 02:55:06.065919 | orchestrator | 2026-03-29 02:55:06.065930 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-29 02:55:06.065941 | orchestrator | Sunday 29 March 2026 02:54:27 +0000 (0:00:00.444) 0:09:05.761 ********** 2026-03-29 02:55:06.065952 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 02:55:06.065964 | orchestrator | 2026-03-29 02:55:06.065975 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-29 02:55:06.066098 | orchestrator | Sunday 29 March 2026 02:54:30 +0000 (0:00:03.173) 0:09:08.934 ********** 2026-03-29 02:55:06.066119 | orchestrator | skipping: [testbed-node-3] 
=> (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-29 02:55:06.066136 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:06.066150 | orchestrator | 2026-03-29 02:55:06.066182 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-29 02:55:06.066195 | orchestrator | Sunday 29 March 2026 02:54:30 +0000 (0:00:00.225) 0:09:09.160 ********** 2026-03-29 02:55:06.066212 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 02:55:06.066237 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 02:55:06.066248 | orchestrator | 2026-03-29 02:55:06.066259 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-29 02:55:06.066270 | orchestrator | Sunday 29 March 2026 02:54:39 +0000 (0:00:08.371) 0:09:17.531 ********** 2026-03-29 02:55:06.066282 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 02:55:06.066293 | orchestrator | 2026-03-29 02:55:06.066304 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-29 02:55:06.066315 | orchestrator | Sunday 29 March 2026 02:54:43 +0000 (0:00:04.403) 0:09:21.935 ********** 2026-03-29 02:55:06.066326 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-29 02:55:06.066338 | orchestrator | 2026-03-29 02:55:06.066349 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-29 02:55:06.066360 | orchestrator | Sunday 29 March 2026 02:54:44 +0000 (0:00:00.573) 0:09:22.509 ********** 2026-03-29 02:55:06.066370 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-29 02:55:06.066382 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-29 02:55:06.066392 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-29 02:55:06.066403 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-29 02:55:06.066414 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-29 02:55:06.066425 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-29 02:55:06.066436 | orchestrator | 2026-03-29 02:55:06.066447 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-29 02:55:06.066458 | orchestrator | Sunday 29 March 2026 02:54:45 +0000 (0:00:01.151) 0:09:23.660 ********** 2026-03-29 02:55:06.066469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:55:06.066480 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 02:55:06.066491 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 02:55:06.066502 | orchestrator | 2026-03-29 02:55:06.066513 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-29 02:55:06.066524 | orchestrator | Sunday 29 March 2026 02:54:47 +0000 (0:00:02.214) 0:09:25.875 ********** 2026-03-29 02:55:06.066536 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 02:55:06.066547 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-29 02:55:06.066558 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:06.066570 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 02:55:06.066590 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-29 02:55:06.066622 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:06.066634 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 02:55:06.066645 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-29 02:55:06.066656 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:06.066667 | orchestrator | 2026-03-29 02:55:06.066678 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-29 02:55:06.066690 | orchestrator | Sunday 29 March 2026 02:54:49 +0000 (0:00:01.575) 0:09:27.450 ********** 2026-03-29 02:55:06.066701 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:06.066711 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:06.066722 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:06.066733 | orchestrator | 2026-03-29 02:55:06.066744 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-29 02:55:06.066755 | orchestrator | Sunday 29 March 2026 02:54:51 +0000 (0:00:02.755) 0:09:30.206 ********** 2026-03-29 02:55:06.066766 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:06.066777 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:06.066788 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:06.066799 | orchestrator | 2026-03-29 02:55:06.066810 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-29 02:55:06.066821 | orchestrator | Sunday 29 March 2026 02:54:52 +0000 (0:00:00.367) 0:09:30.573 ********** 2026-03-29 02:55:06.066832 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-29 02:55:06.066843 | orchestrator | 2026-03-29 02:55:06.066854 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-29 02:55:06.066865 | orchestrator | Sunday 29 March 2026 02:54:52 +0000 (0:00:00.795) 0:09:31.368 ********** 2026-03-29 02:55:06.066876 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:55:06.066887 | orchestrator | 2026-03-29 02:55:06.066898 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-29 02:55:06.066909 | orchestrator | Sunday 29 March 2026 02:54:53 +0000 (0:00:00.569) 0:09:31.937 ********** 2026-03-29 02:55:06.066920 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:06.066937 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:06.066948 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:06.066959 | orchestrator | 2026-03-29 02:55:06.066970 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-29 02:55:06.066981 | orchestrator | Sunday 29 March 2026 02:54:54 +0000 (0:00:01.339) 0:09:33.276 ********** 2026-03-29 02:55:06.066992 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:06.067003 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:06.067014 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:06.067026 | orchestrator | 2026-03-29 02:55:06.067045 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-29 02:55:06.067085 | orchestrator | Sunday 29 March 2026 02:54:56 +0000 (0:00:01.585) 0:09:34.862 ********** 2026-03-29 02:55:06.067111 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:06.067133 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:06.067151 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:06.067169 | orchestrator | 2026-03-29 
02:55:06.067186 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-29 02:55:06.067204 | orchestrator | Sunday 29 March 2026 02:54:58 +0000 (0:00:01.852) 0:09:36.714 ********** 2026-03-29 02:55:06.067220 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:06.067236 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:06.067255 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:06.067272 | orchestrator | 2026-03-29 02:55:06.067289 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-29 02:55:06.067308 | orchestrator | Sunday 29 March 2026 02:55:00 +0000 (0:00:01.972) 0:09:38.686 ********** 2026-03-29 02:55:06.067339 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:06.067358 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:06.067377 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:06.067396 | orchestrator | 2026-03-29 02:55:06.067415 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 02:55:06.067433 | orchestrator | Sunday 29 March 2026 02:55:01 +0000 (0:00:01.545) 0:09:40.232 ********** 2026-03-29 02:55:06.067451 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:06.067468 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:06.067486 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:06.067504 | orchestrator | 2026-03-29 02:55:06.067521 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-29 02:55:06.067571 | orchestrator | Sunday 29 March 2026 02:55:02 +0000 (0:00:00.702) 0:09:40.934 ********** 2026-03-29 02:55:06.067593 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:55:06.067604 | orchestrator | 2026-03-29 02:55:06.067615 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-29 02:55:06.067626 | orchestrator | Sunday 29 March 2026 02:55:03 +0000 (0:00:00.882) 0:09:41.817 ********** 2026-03-29 02:55:06.067641 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:06.067659 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:06.067677 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:06.067694 | orchestrator | 2026-03-29 02:55:06.067712 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-29 02:55:06.067730 | orchestrator | Sunday 29 March 2026 02:55:03 +0000 (0:00:00.355) 0:09:42.172 ********** 2026-03-29 02:55:06.067749 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:06.067768 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:06.067784 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:06.067803 | orchestrator | 2026-03-29 02:55:06.067821 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-29 02:55:06.067841 | orchestrator | Sunday 29 March 2026 02:55:04 +0000 (0:00:01.237) 0:09:43.409 ********** 2026-03-29 02:55:06.067862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 02:55:06.067882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 02:55:06.067900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 02:55:06.067935 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.886336 | orchestrator | 2026-03-29 02:55:25.886461 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-29 02:55:25.886469 | orchestrator | Sunday 29 March 2026 02:55:06 +0000 (0:00:01.056) 0:09:44.465 ********** 2026-03-29 02:55:25.886474 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.886478 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.886482 | orchestrator | ok: [testbed-node-5] 2026-03-29 
02:55:25.886486 | orchestrator | 2026-03-29 02:55:25.886490 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-29 02:55:25.886494 | orchestrator | 2026-03-29 02:55:25.886498 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 02:55:25.886502 | orchestrator | Sunday 29 March 2026 02:55:07 +0000 (0:00:01.033) 0:09:45.498 ********** 2026-03-29 02:55:25.886506 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:55:25.886510 | orchestrator | 2026-03-29 02:55:25.886514 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 02:55:25.886518 | orchestrator | Sunday 29 March 2026 02:55:07 +0000 (0:00:00.564) 0:09:46.063 ********** 2026-03-29 02:55:25.886522 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:55:25.886525 | orchestrator | 2026-03-29 02:55:25.886529 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 02:55:25.886533 | orchestrator | Sunday 29 March 2026 02:55:08 +0000 (0:00:00.874) 0:09:46.937 ********** 2026-03-29 02:55:25.886558 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.886565 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.886571 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:25.886577 | orchestrator | 2026-03-29 02:55:25.886583 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 02:55:25.886589 | orchestrator | Sunday 29 March 2026 02:55:08 +0000 (0:00:00.350) 0:09:47.288 ********** 2026-03-29 02:55:25.886595 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.886601 | orchestrator | ok: [testbed-node-4] 2026-03-29 
02:55:25.886606 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.886612 | orchestrator | 2026-03-29 02:55:25.886632 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 02:55:25.886639 | orchestrator | Sunday 29 March 2026 02:55:09 +0000 (0:00:00.718) 0:09:48.006 ********** 2026-03-29 02:55:25.886645 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.886650 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.886657 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.886662 | orchestrator | 2026-03-29 02:55:25.886668 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 02:55:25.886674 | orchestrator | Sunday 29 March 2026 02:55:10 +0000 (0:00:01.026) 0:09:49.033 ********** 2026-03-29 02:55:25.886680 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.886686 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.886692 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.886698 | orchestrator | 2026-03-29 02:55:25.886704 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 02:55:25.886710 | orchestrator | Sunday 29 March 2026 02:55:11 +0000 (0:00:00.762) 0:09:49.796 ********** 2026-03-29 02:55:25.886716 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.886722 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.886728 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:25.886734 | orchestrator | 2026-03-29 02:55:25.886741 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 02:55:25.886747 | orchestrator | Sunday 29 March 2026 02:55:11 +0000 (0:00:00.346) 0:09:50.142 ********** 2026-03-29 02:55:25.886753 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.886759 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.886765 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 02:55:25.886771 | orchestrator | 2026-03-29 02:55:25.886777 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 02:55:25.886783 | orchestrator | Sunday 29 March 2026 02:55:12 +0000 (0:00:00.330) 0:09:50.473 ********** 2026-03-29 02:55:25.886789 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.886795 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.886801 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:25.886806 | orchestrator | 2026-03-29 02:55:25.886812 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 02:55:25.886818 | orchestrator | Sunday 29 March 2026 02:55:12 +0000 (0:00:00.597) 0:09:51.071 ********** 2026-03-29 02:55:25.886824 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.886830 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.886836 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.886842 | orchestrator | 2026-03-29 02:55:25.886847 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 02:55:25.886853 | orchestrator | Sunday 29 March 2026 02:55:13 +0000 (0:00:00.738) 0:09:51.809 ********** 2026-03-29 02:55:25.886859 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.886865 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.886871 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.886877 | orchestrator | 2026-03-29 02:55:25.886884 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 02:55:25.886890 | orchestrator | Sunday 29 March 2026 02:55:14 +0000 (0:00:00.783) 0:09:52.593 ********** 2026-03-29 02:55:25.886896 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.886909 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.886915 | orchestrator | skipping: [testbed-node-5] 2026-03-29 
02:55:25.886921 | orchestrator | 2026-03-29 02:55:25.886927 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 02:55:25.886933 | orchestrator | Sunday 29 March 2026 02:55:14 +0000 (0:00:00.348) 0:09:52.942 ********** 2026-03-29 02:55:25.886940 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.886946 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.886952 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:25.886958 | orchestrator | 2026-03-29 02:55:25.886964 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 02:55:25.886970 | orchestrator | Sunday 29 March 2026 02:55:15 +0000 (0:00:00.666) 0:09:53.608 ********** 2026-03-29 02:55:25.886976 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.886995 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.887001 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.887008 | orchestrator | 2026-03-29 02:55:25.887014 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 02:55:25.887020 | orchestrator | Sunday 29 March 2026 02:55:15 +0000 (0:00:00.443) 0:09:54.051 ********** 2026-03-29 02:55:25.887026 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.887032 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.887038 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.887044 | orchestrator | 2026-03-29 02:55:25.887051 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 02:55:25.887057 | orchestrator | Sunday 29 March 2026 02:55:16 +0000 (0:00:00.381) 0:09:54.433 ********** 2026-03-29 02:55:25.887081 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.887087 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.887094 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.887100 | orchestrator | 2026-03-29 
02:55:25.887107 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 02:55:25.887113 | orchestrator | Sunday 29 March 2026 02:55:16 +0000 (0:00:00.447) 0:09:54.880 ********** 2026-03-29 02:55:25.887120 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.887127 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.887133 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:25.887139 | orchestrator | 2026-03-29 02:55:25.887145 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 02:55:25.887150 | orchestrator | Sunday 29 March 2026 02:55:17 +0000 (0:00:00.658) 0:09:55.539 ********** 2026-03-29 02:55:25.887156 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.887162 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.887168 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:25.887173 | orchestrator | 2026-03-29 02:55:25.887180 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 02:55:25.887186 | orchestrator | Sunday 29 March 2026 02:55:17 +0000 (0:00:00.327) 0:09:55.867 ********** 2026-03-29 02:55:25.887192 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.887197 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.887203 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:25.887208 | orchestrator | 2026-03-29 02:55:25.887219 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 02:55:25.887225 | orchestrator | Sunday 29 March 2026 02:55:17 +0000 (0:00:00.312) 0:09:56.179 ********** 2026-03-29 02:55:25.887230 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.887236 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.887242 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.887249 | orchestrator | 2026-03-29 02:55:25.887255 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 02:55:25.887261 | orchestrator | Sunday 29 March 2026 02:55:18 +0000 (0:00:00.386) 0:09:56.565 ********** 2026-03-29 02:55:25.887266 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:55:25.887272 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:55:25.887278 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:55:25.887289 | orchestrator | 2026-03-29 02:55:25.887295 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-29 02:55:25.887301 | orchestrator | Sunday 29 March 2026 02:55:19 +0000 (0:00:00.899) 0:09:57.465 ********** 2026-03-29 02:55:25.887309 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:55:25.887316 | orchestrator | 2026-03-29 02:55:25.887322 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-29 02:55:25.887327 | orchestrator | Sunday 29 March 2026 02:55:19 +0000 (0:00:00.640) 0:09:58.105 ********** 2026-03-29 02:55:25.887333 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:55:25.887340 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 02:55:25.887346 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 02:55:25.887352 | orchestrator | 2026-03-29 02:55:25.887357 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-29 02:55:25.887363 | orchestrator | Sunday 29 March 2026 02:55:22 +0000 (0:00:02.676) 0:10:00.782 ********** 2026-03-29 02:55:25.887369 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 02:55:25.887376 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 02:55:25.887382 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:55:25.887389 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-29 02:55:25.887395 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-29 02:55:25.887401 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:55:25.887407 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 02:55:25.887412 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-29 02:55:25.887418 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:55:25.887424 | orchestrator | 2026-03-29 02:55:25.887430 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-29 02:55:25.887436 | orchestrator | Sunday 29 March 2026 02:55:23 +0000 (0:00:01.305) 0:10:02.087 ********** 2026-03-29 02:55:25.887442 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:55:25.887448 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:55:25.887453 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:55:25.887459 | orchestrator | 2026-03-29 02:55:25.887465 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-29 02:55:25.887471 | orchestrator | Sunday 29 March 2026 02:55:24 +0000 (0:00:00.357) 0:10:02.445 ********** 2026-03-29 02:55:25.887476 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:55:25.887483 | orchestrator | 2026-03-29 02:55:25.887488 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-29 02:55:25.887494 | orchestrator | Sunday 29 March 2026 02:55:24 +0000 (0:00:00.900) 0:10:03.345 ********** 2026-03-29 02:55:25.887501 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 02:55:25.887516 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 02:56:18.043156 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 02:56:18.043261 | orchestrator | 2026-03-29 02:56:18.043273 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-29 02:56:18.043282 | orchestrator | Sunday 29 March 2026 02:55:25 +0000 (0:00:00.940) 0:10:04.285 ********** 2026-03-29 02:56:18.043288 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:56:18.043297 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 02:56:18.043326 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:56:18.043335 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 02:56:18.043341 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:56:18.043348 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 02:56:18.043354 | orchestrator | 2026-03-29 02:56:18.043361 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-29 02:56:18.043367 | orchestrator | Sunday 29 March 2026 02:55:30 +0000 (0:00:04.752) 0:10:09.038 ********** 2026-03-29 02:56:18.043374 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:56:18.043382 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 02:56:18.043402 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:56:18.043409 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:56:18.043416 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 02:56:18.043422 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 02:56:18.043428 | orchestrator | 2026-03-29 02:56:18.043435 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-29 02:56:18.043441 | orchestrator | Sunday 29 March 2026 02:55:33 +0000 (0:00:02.460) 0:10:11.498 ********** 2026-03-29 02:56:18.043448 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 02:56:18.043455 | orchestrator | changed: [testbed-node-3] 2026-03-29 02:56:18.043462 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 02:56:18.043468 | orchestrator | changed: [testbed-node-4] 2026-03-29 02:56:18.043474 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 02:56:18.043480 | orchestrator | changed: [testbed-node-5] 2026-03-29 02:56:18.043486 | orchestrator | 2026-03-29 02:56:18.043493 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-29 02:56:18.043499 | orchestrator | Sunday 29 March 2026 02:55:34 +0000 (0:00:01.537) 0:10:13.036 ********** 2026-03-29 02:56:18.043506 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-29 02:56:18.043512 | orchestrator | 2026-03-29 02:56:18.043518 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-29 02:56:18.043524 | orchestrator | Sunday 29 March 2026 02:55:34 +0000 (0:00:00.277) 0:10:13.314 ********** 2026-03-29 02:56:18.043531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-29 02:56:18.043538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043563 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:18.043570 | orchestrator | 2026-03-29 02:56:18.043576 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-29 02:56:18.043583 | orchestrator | Sunday 29 March 2026 02:55:35 +0000 (0:00:00.638) 0:10:13.953 ********** 2026-03-29 02:56:18.043589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 02:56:18.043628 | orchestrator | skipping: [testbed-node-3] 2026-03-29 
02:56:18.043634 | orchestrator | 2026-03-29 02:56:18.043654 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-29 02:56:18.043660 | orchestrator | Sunday 29 March 2026 02:55:36 +0000 (0:00:00.672) 0:10:14.625 ********** 2026-03-29 02:56:18.043667 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 02:56:18.043674 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 02:56:18.043680 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 02:56:18.043687 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 02:56:18.043693 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 02:56:18.043699 | orchestrator | 2026-03-29 02:56:18.043706 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-29 02:56:18.043712 | orchestrator | Sunday 29 March 2026 02:56:07 +0000 (0:00:31.153) 0:10:45.779 ********** 2026-03-29 02:56:18.043718 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:18.043725 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:18.043731 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:18.043737 | orchestrator | 2026-03-29 02:56:18.043746 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-29 02:56:18.043752 | orchestrator | 
Sunday 29 March 2026 02:56:07 +0000 (0:00:00.332) 0:10:46.111 **********
2026-03-29 02:56:18.043758 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:18.043765 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:56:18.043771 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:56:18.043777 | orchestrator |
2026-03-29 02:56:18.043783 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-29 02:56:18.043790 | orchestrator | Sunday 29 March 2026 02:56:08 +0000 (0:00:00.572) 0:10:46.683 **********
2026-03-29 02:56:18.043796 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:56:18.043803 | orchestrator |
2026-03-29 02:56:18.043809 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-29 02:56:18.043815 | orchestrator | Sunday 29 March 2026 02:56:08 +0000 (0:00:00.595) 0:10:47.279 **********
2026-03-29 02:56:18.043821 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:56:18.043831 | orchestrator |
2026-03-29 02:56:18.043837 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-29 02:56:18.043844 | orchestrator | Sunday 29 March 2026 02:56:09 +0000 (0:00:00.796) 0:10:48.075 **********
2026-03-29 02:56:18.043851 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:56:18.043857 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:56:18.043864 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:56:18.043900 | orchestrator |
2026-03-29 02:56:18.043907 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-29 02:56:18.043913 | orchestrator | Sunday 29 March 2026 02:56:11 +0000 (0:00:01.381) 0:10:49.456 **********
2026-03-29 02:56:18.043919 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:56:18.043926 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:56:18.043932 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:56:18.043938 | orchestrator |
2026-03-29 02:56:18.043945 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-29 02:56:18.043951 | orchestrator | Sunday 29 March 2026 02:56:12 +0000 (0:00:01.229) 0:10:50.686 **********
2026-03-29 02:56:18.043957 | orchestrator | changed: [testbed-node-4]
2026-03-29 02:56:18.043964 | orchestrator | changed: [testbed-node-3]
2026-03-29 02:56:18.043970 | orchestrator | changed: [testbed-node-5]
2026-03-29 02:56:18.043976 | orchestrator |
2026-03-29 02:56:18.043982 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-29 02:56:18.043988 | orchestrator | Sunday 29 March 2026 02:56:14 +0000 (0:00:01.781) 0:10:52.468 **********
2026-03-29 02:56:18.043994 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-29 02:56:18.044000 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-29 02:56:18.044007 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-29 02:56:18.044014 | orchestrator |
2026-03-29 02:56:18.044020 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-29 02:56:18.044026 | orchestrator | Sunday 29 March 2026 02:56:16 +0000 (0:00:02.793) 0:10:55.262 **********
2026-03-29 02:56:18.044032 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:18.044039 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:56:18.044045 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:56:18.044051 | orchestrator |
2026-03-29 02:56:18.044057 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-29 02:56:18.044063 | orchestrator | Sunday 29 March 2026 02:56:17 +0000 (0:00:00.339) 0:10:55.601 **********
2026-03-29 02:56:18.044070 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:56:18.044076 | orchestrator |
2026-03-29 02:56:18.044087 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-29 02:56:20.622979 | orchestrator | Sunday 29 March 2026 02:56:18 +0000 (0:00:00.832) 0:10:56.434 **********
2026-03-29 02:56:20.623111 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:20.623127 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:20.623142 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:20.623158 | orchestrator |
2026-03-29 02:56:20.623179 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-29 02:56:20.623201 | orchestrator | Sunday 29 March 2026 02:56:18 +0000 (0:00:00.333) 0:10:56.768 **********
2026-03-29 02:56:20.623215 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:20.623231 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:56:20.623245 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:56:20.623258 | orchestrator |
2026-03-29 02:56:20.623272 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-29 02:56:20.623286 | orchestrator | Sunday 29 March 2026 02:56:18 +0000 (0:00:00.326) 0:10:57.094 **********
2026-03-29 02:56:20.623301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 02:56:20.623317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 02:56:20.623330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 02:56:20.623344 | orchestrator
| skipping: [testbed-node-3]
2026-03-29 02:56:20.623358 | orchestrator |
2026-03-29 02:56:20.623374 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-29 02:56:20.623432 | orchestrator | Sunday 29 March 2026 02:56:19 +0000 (0:00:01.164) 0:10:58.259 **********
2026-03-29 02:56:20.623447 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:20.623463 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:20.623477 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:20.623485 | orchestrator |
2026-03-29 02:56:20.623494 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:56:20.623522 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-29 02:56:20.623534 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-29 02:56:20.623542 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-29 02:56:20.623551 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-29 02:56:20.623560 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-29 02:56:20.623568 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-29 02:56:20.623577 | orchestrator |
2026-03-29 02:56:20.623586 | orchestrator |
2026-03-29 02:56:20.623594 | orchestrator |
2026-03-29 02:56:20.623603 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:56:20.623612 | orchestrator | Sunday 29 March 2026 02:56:20 +0000 (0:00:00.260) 0:10:58.520 **********
2026-03-29 02:56:20.623620 | orchestrator | ===============================================================================
2026-03-29 02:56:20.623629 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.02s
2026-03-29 02:56:20.623637 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.99s
2026-03-29 02:56:20.623646 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.33s
2026-03-29 02:56:20.623654 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.15s
2026-03-29 02:56:20.623663 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.15s
2026-03-29 02:56:20.623671 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.98s
2026-03-29 02:56:20.623680 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s
2026-03-29 02:56:20.623688 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.15s
2026-03-29 02:56:20.623697 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.41s
2026-03-29 02:56:20.623705 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.37s
2026-03-29 02:56:20.623713 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.72s
2026-03-29 02:56:20.623723 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.65s
2026-03-29 02:56:20.623732 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.03s
2026-03-29 02:56:20.623742 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.75s
2026-03-29 02:56:20.623752 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.40s
2026-03-29 02:56:20.623761 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.39s
2026-03-29
02:56:20.623771 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.33s
2026-03-29 02:56:20.623780 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.45s
2026-03-29 02:56:20.623790 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.39s
2026-03-29 02:56:20.623810 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.38s
2026-03-29 02:56:23.057484 | orchestrator | 2026-03-29 02:56:23 | INFO  | Task 502c65aa-27c4-4f63-9775-7aa8f22173ff (ceph-pools) was prepared for execution.
2026-03-29 02:56:23.057589 | orchestrator | 2026-03-29 02:56:23 | INFO  | It takes a moment until task 502c65aa-27c4-4f63-9775-7aa8f22173ff (ceph-pools) has been started and output is visible here.
2026-03-29 02:56:36.737778 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-29 02:56:36.737928 | orchestrator | 2.16.14
2026-03-29 02:56:36.737941 | orchestrator |
2026-03-29 02:56:36.737951 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-29 02:56:36.737960 | orchestrator |
2026-03-29 02:56:36.737969 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-29 02:56:36.737978 | orchestrator | Sunday 29 March 2026 02:56:27 +0000 (0:00:00.619) 0:00:00.619 **********
2026-03-29 02:56:36.737986 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 02:56:36.737995 | orchestrator |
2026-03-29 02:56:36.738003 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-29 02:56:36.738055 | orchestrator | Sunday 29 March 2026 02:56:28 +0000 (0:00:00.678) 0:00:01.298 **********
2026-03-29 02:56:36.738065 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:36.738074 |
orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:36.738082 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:36.738090 | orchestrator |
2026-03-29 02:56:36.738098 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-29 02:56:36.738107 | orchestrator | Sunday 29 March 2026 02:56:28 +0000 (0:00:00.646) 0:00:01.945 **********
2026-03-29 02:56:36.738121 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:36.738130 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:36.738138 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:36.738146 | orchestrator |
2026-03-29 02:56:36.738167 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-29 02:56:36.738176 | orchestrator | Sunday 29 March 2026 02:56:29 +0000 (0:00:00.297) 0:00:02.242 **********
2026-03-29 02:56:36.738184 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:36.738192 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:36.738200 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:36.738208 | orchestrator |
2026-03-29 02:56:36.738216 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-29 02:56:36.738224 | orchestrator | Sunday 29 March 2026 02:56:30 +0000 (0:00:00.903) 0:00:03.146 **********
2026-03-29 02:56:36.738232 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:36.738240 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:36.738247 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:36.738255 | orchestrator |
2026-03-29 02:56:36.738263 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-29 02:56:36.738271 | orchestrator | Sunday 29 March 2026 02:56:30 +0000 (0:00:00.316) 0:00:03.462 **********
2026-03-29 02:56:36.738279 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:36.738287 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:36.738295 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:36.738303 | orchestrator |
2026-03-29 02:56:36.738325 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-29 02:56:36.738335 | orchestrator | Sunday 29 March 2026 02:56:30 +0000 (0:00:00.310) 0:00:03.773 **********
2026-03-29 02:56:36.738344 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:36.738352 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:36.738362 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:36.738371 | orchestrator |
2026-03-29 02:56:36.738380 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-29 02:56:36.738389 | orchestrator | Sunday 29 March 2026 02:56:31 +0000 (0:00:00.329) 0:00:04.103 **********
2026-03-29 02:56:36.738399 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:36.738430 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:56:36.738441 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:56:36.738449 | orchestrator |
2026-03-29 02:56:36.738458 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-29 02:56:36.738468 | orchestrator | Sunday 29 March 2026 02:56:31 +0000 (0:00:00.504) 0:00:04.607 **********
2026-03-29 02:56:36.738477 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:36.738486 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:36.738495 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:36.738503 | orchestrator |
2026-03-29 02:56:36.738511 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-29 02:56:36.738519 | orchestrator | Sunday 29 March 2026 02:56:31 +0000 (0:00:00.294) 0:00:04.901 **********
2026-03-29 02:56:36.738526 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 02:56:36.738535 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 02:56:36.738542 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 02:56:36.738551 | orchestrator |
2026-03-29 02:56:36.738565 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-29 02:56:36.738577 | orchestrator | Sunday 29 March 2026 02:56:32 +0000 (0:00:00.614) 0:00:05.516 **********
2026-03-29 02:56:36.738591 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:36.738603 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:36.738615 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:36.738628 | orchestrator |
2026-03-29 02:56:36.738641 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-29 02:56:36.738652 | orchestrator | Sunday 29 March 2026 02:56:32 +0000 (0:00:00.410) 0:00:05.927 **********
2026-03-29 02:56:36.738665 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 02:56:36.738678 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 02:56:36.738690 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 02:56:36.738703 | orchestrator |
2026-03-29 02:56:36.738716 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-29 02:56:36.738730 | orchestrator | Sunday 29 March 2026 02:56:34 +0000 (0:00:02.016) 0:00:07.944 **********
2026-03-29 02:56:36.738744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 02:56:36.738758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 02:56:36.738771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 02:56:36.738784 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:36.738798 |
orchestrator | 2026-03-29 02:56:36.738906 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-29 02:56:36.738923 | orchestrator | Sunday 29 March 2026 02:56:35 +0000 (0:00:00.554) 0:00:08.498 ********** 2026-03-29 02:56:36.738939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 02:56:36.738954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 02:56:36.738969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 02:56:36.738983 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:36.738996 | orchestrator | 2026-03-29 02:56:36.739009 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-29 02:56:36.739044 | orchestrator | Sunday 29 March 2026 02:56:36 +0000 (0:00:00.855) 0:00:09.353 ********** 2026-03-29 02:56:36.739060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 02:56:36.739076 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 02:56:36.739089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 02:56:36.739102 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:36.739116 | orchestrator | 2026-03-29 02:56:36.739129 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-29 02:56:36.739142 | orchestrator | Sunday 29 March 2026 02:56:36 +0000 (0:00:00.138) 0:00:09.492 ********** 2026-03-29 02:56:36.739156 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '76a3923fe123', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 02:56:33.706178', 'end': '2026-03-29 02:56:33.757835', 'delta': '0:00:00.051657', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['76a3923fe123'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-29 02:56:36.739173 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a6db66d8015c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 02:56:34.265169', 'end': '2026-03-29 02:56:34.299580', 'delta': '0:00:00.034411', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6db66d8015c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-29 02:56:36.739198 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5a2b09aac491', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 02:56:34.801609', 'end': '2026-03-29 02:56:34.849633', 'delta': '0:00:00.048024', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5a2b09aac491'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-29 02:56:43.565461 | orchestrator | 2026-03-29 02:56:43.565616 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-29 02:56:43.565641 | orchestrator | Sunday 29 March 2026 02:56:36 +0000 (0:00:00.193) 0:00:09.686 ********** 2026-03-29 02:56:43.565656 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:56:43.565670 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:56:43.565684 | 
orchestrator | ok: [testbed-node-5] 2026-03-29 02:56:43.565697 | orchestrator | 2026-03-29 02:56:43.565729 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-29 02:56:43.565744 | orchestrator | Sunday 29 March 2026 02:56:37 +0000 (0:00:00.451) 0:00:10.137 ********** 2026-03-29 02:56:43.565758 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-29 02:56:43.565772 | orchestrator | 2026-03-29 02:56:43.565834 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-29 02:56:43.565849 | orchestrator | Sunday 29 March 2026 02:56:38 +0000 (0:00:01.728) 0:00:11.865 ********** 2026-03-29 02:56:43.565863 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.565876 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.565888 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.565902 | orchestrator | 2026-03-29 02:56:43.565916 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-29 02:56:43.565931 | orchestrator | Sunday 29 March 2026 02:56:39 +0000 (0:00:00.300) 0:00:12.166 ********** 2026-03-29 02:56:43.565944 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.565958 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.565972 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.565986 | orchestrator | 2026-03-29 02:56:43.565998 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 02:56:43.566012 | orchestrator | Sunday 29 March 2026 02:56:39 +0000 (0:00:00.631) 0:00:12.797 ********** 2026-03-29 02:56:43.566084 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.566101 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.566117 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.566132 | orchestrator | 2026-03-29 02:56:43.566147 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-29 02:56:43.566162 | orchestrator | Sunday 29 March 2026 02:56:40 +0000 (0:00:00.270) 0:00:13.067 ********** 2026-03-29 02:56:43.566175 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:56:43.566189 | orchestrator | 2026-03-29 02:56:43.566202 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-29 02:56:43.566215 | orchestrator | Sunday 29 March 2026 02:56:40 +0000 (0:00:00.126) 0:00:13.193 ********** 2026-03-29 02:56:43.566229 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.566242 | orchestrator | 2026-03-29 02:56:43.566256 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 02:56:43.566270 | orchestrator | Sunday 29 March 2026 02:56:40 +0000 (0:00:00.234) 0:00:13.428 ********** 2026-03-29 02:56:43.566284 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.566299 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.566314 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.566328 | orchestrator | 2026-03-29 02:56:43.566343 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-29 02:56:43.566358 | orchestrator | Sunday 29 March 2026 02:56:40 +0000 (0:00:00.282) 0:00:13.711 ********** 2026-03-29 02:56:43.566374 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.566388 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.566401 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.566411 | orchestrator | 2026-03-29 02:56:43.566420 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-29 02:56:43.566429 | orchestrator | Sunday 29 March 2026 02:56:41 +0000 (0:00:00.492) 0:00:14.203 ********** 2026-03-29 02:56:43.566438 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 02:56:43.566448 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.566483 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.566492 | orchestrator | 2026-03-29 02:56:43.566501 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-29 02:56:43.566510 | orchestrator | Sunday 29 March 2026 02:56:41 +0000 (0:00:00.339) 0:00:14.543 ********** 2026-03-29 02:56:43.566519 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.566529 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.566538 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.566547 | orchestrator | 2026-03-29 02:56:43.566557 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-29 02:56:43.566566 | orchestrator | Sunday 29 March 2026 02:56:41 +0000 (0:00:00.369) 0:00:14.913 ********** 2026-03-29 02:56:43.566575 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.566584 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.566593 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.566602 | orchestrator | 2026-03-29 02:56:43.566612 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-29 02:56:43.566621 | orchestrator | Sunday 29 March 2026 02:56:42 +0000 (0:00:00.363) 0:00:15.276 ********** 2026-03-29 02:56:43.566630 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.566639 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.566648 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.566657 | orchestrator | 2026-03-29 02:56:43.566666 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-29 02:56:43.566676 | orchestrator | Sunday 29 March 2026 02:56:42 +0000 (0:00:00.633) 0:00:15.909 ********** 2026-03-29 02:56:43.566685 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 02:56:43.566695 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.566704 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:43.566712 | orchestrator | 2026-03-29 02:56:43.566721 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-29 02:56:43.566731 | orchestrator | Sunday 29 March 2026 02:56:43 +0000 (0:00:00.374) 0:00:16.283 ********** 2026-03-29 02:56:43.566778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c', 'dm-uuid-LVM-WmwWNP6o5LQNgrcvTESUpu2sCljSf9EJkfdNL8HsipxQGyavpLq36XQFDCYO8YrP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.566826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f', 'dm-uuid-LVM-0kHDhDCPHLGd2Fg1VzOlgDOeDKeaHucwfak19l6KqwOwdAXhRxsleFnI4v0OuiOl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.566842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.566858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.566883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.566897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.566909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-29 02:56:43.566923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.566937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.566965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.676286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.676398 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w79kNO-xrib-djNF-BC1b-oenW-947w-67KtbL', 'scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472', 'scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.676412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-W8BXAo-VIeS-lNkU-0xsH-1v6j-IWb5-xeSbRL', 'scsi-0QEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249', 'scsi-SQEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.676440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e', 'scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.676450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056', 'dm-uuid-LVM-IXftd1VPXOpbncKd3f2ob1nYXsz4DemJ2XJQIMxaL0NRJ8j3ZeXDz0EJW4fLUFzW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.676459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.676473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948', 
'dm-uuid-LVM-VVVRanGAMYCBBo3Ea1Is2tjcYgwKNf2qA0QNo4TmjeChe8gjBEKp176k85VNMXVp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.676481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.676491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.676498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.676506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.676518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.874700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.874853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.874884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.874895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.874919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OXgjHK-x1j6-yafV-EcrV-Z8hS-LdwZ-h63E7O', 'scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0', 'scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.874934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TjrJ6N-vXHW-nYMX-XIsI-w8Ql-NkWF-pB5l7A', 'scsi-0QEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62', 'scsi-SQEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.874947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a', 'scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.874955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:43.874963 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:43.874971 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:43.874977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844', 'dm-uuid-LVM-VJ9z4eyflUTf2lcw8J1Bh3VXDEKKGuPmdvxBFAfXTwWZGF4ojvc0MEIvaFMTSMoe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.874985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33', 'dm-uuid-LVM-ZRBeHs6onLIpNjnfPONnwMoGWYFOYt3b0sOhEPSSzOPtCa3muL1oqHvJG7beZNDD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.874992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:43.875003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:44.147502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:44.147609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:44.147620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:44.147629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:44.147636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:44.147644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 02:56:44.147690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:44.147719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FIE3VR-hmEq-gbau-KgWX-Ie3n-RrWX-Y63w2o', 'scsi-0QEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735', 'scsi-SQEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:44.147734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VkIrl1-06lK-dW9p-hM1X-TIpn-uX5t-oclg00', 'scsi-0QEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa', 'scsi-SQEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:44.147747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b', 'scsi-SQEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:44.147761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-01-37-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 02:56:44.147775 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:44.147852 | orchestrator | 2026-03-29 02:56:44.147867 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-29 02:56:44.147881 | orchestrator | Sunday 29 March 2026 02:56:43 +0000 (0:00:00.664) 0:00:16.948 ********** 2026-03-29 02:56:44.147913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c', 'dm-uuid-LVM-WmwWNP6o5LQNgrcvTESUpu2sCljSf9EJkfdNL8HsipxQGyavpLq36XQFDCYO8YrP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:56:44.247408 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f', 'dm-uuid-LVM-0kHDhDCPHLGd2Fg1VzOlgDOeDKeaHucwfak19l6KqwOwdAXhRxsleFnI4v0OuiOl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 02:56:44.247515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
2026-03-29 02:56:44.247534 | orchestrator | skipping: [testbed-node-3] => (items loop0-loop7, sda, sdb, sdc, sdd, sr0 | skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2026-03-29 02:56:44.386301 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0 | skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2026-03-29 02:56:44.503895 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0 | skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2026-03-29 02:56:44.503714 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:44.503870 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:56:56.363571 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:56:56.363597 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-29 02:56:56.363610 | orchestrator | Sunday 29 March 2026 02:56:44 +0000 (0:00:00.771) 0:00:17.719 **********
2026-03-29 02:56:56.363629 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:56.363642 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:56.363653 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:56.363672 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-29 02:56:56.363679 | orchestrator | Sunday 29 March 2026 02:56:45 +0000 (0:00:01.113) 0:00:18.833 **********
2026-03-29 02:56:56.363686 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:56.363696 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:56.363706 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:56.363725 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-29 02:56:56.363825 | orchestrator | Sunday 29 March 2026 02:56:46 +0000 (0:00:00.363) 0:00:19.196 **********
2026-03-29 02:56:56.363842 | orchestrator | ok: [testbed-node-3]
2026-03-29 02:56:56.363849 | orchestrator | ok: [testbed-node-4]
2026-03-29 02:56:56.363855 | orchestrator | ok: [testbed-node-5]
2026-03-29 02:56:56.363867 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-29 02:56:56.363874 | orchestrator | Sunday 29 March 2026 02:56:46 +0000 (0:00:00.739) 0:00:19.936 **********
2026-03-29 02:56:56.363880 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:56.363886 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:56:56.363892 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:56:56.363904 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-29 02:56:56.363910 | orchestrator | Sunday 29 March 2026 02:56:47 +0000 (0:00:00.326) 0:00:20.262 **********
2026-03-29 02:56:56.363917 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:56.363923 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:56:56.363929 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:56:56.363947 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-29 02:56:56.363957 | orchestrator | Sunday 29 March 2026 02:56:48 +0000 (0:00:00.879) 0:00:21.142 **********
2026-03-29 02:56:56.363967 | orchestrator | skipping: [testbed-node-3]
2026-03-29 02:56:56.363978 | orchestrator | skipping: [testbed-node-4]
2026-03-29 02:56:56.363989 | orchestrator | skipping: [testbed-node-5]
2026-03-29 02:56:56.364009 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-29 02:56:56.364020 | orchestrator | Sunday 29 March 2026 02:56:48 +0000 (0:00:00.353) 0:00:21.496 **********
2026-03-29 02:56:56.364029 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 02:56:56.364040 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 02:56:56.364050 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 02:56:56.364060 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 02:56:56.364080 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 02:56:56.364091 | orchestrator
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-29 02:56:56.364102 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-29 02:56:56.364112 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-29 02:56:56.364124 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-29 02:56:56.364131 | orchestrator | 2026-03-29 02:56:56.364138 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 02:56:56.364144 | orchestrator | Sunday 29 March 2026 02:56:49 +0000 (0:00:01.231) 0:00:22.728 ********** 2026-03-29 02:56:56.364165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 02:56:56.364173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-29 02:56:56.364179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 02:56:56.364185 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:56.364191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-29 02:56:56.364197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-29 02:56:56.364203 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-29 02:56:56.364209 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:56.364215 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-29 02:56:56.364221 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-29 02:56:56.364227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-29 02:56:56.364233 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:56.364239 | orchestrator | 2026-03-29 02:56:56.364245 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-29 02:56:56.364251 | orchestrator | Sunday 29 March 2026 02:56:50 +0000 (0:00:00.397) 0:00:23.125 ********** 2026-03-29 
02:56:56.364258 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 02:56:56.364264 | orchestrator | 2026-03-29 02:56:56.364271 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-29 02:56:56.364278 | orchestrator | Sunday 29 March 2026 02:56:51 +0000 (0:00:00.890) 0:00:24.016 ********** 2026-03-29 02:56:56.364284 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:56.364291 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:56.364297 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:56.364303 | orchestrator | 2026-03-29 02:56:56.364309 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-29 02:56:56.364315 | orchestrator | Sunday 29 March 2026 02:56:51 +0000 (0:00:00.381) 0:00:24.398 ********** 2026-03-29 02:56:56.364321 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:56.364327 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:56.364333 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:56.364339 | orchestrator | 2026-03-29 02:56:56.364345 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-29 02:56:56.364351 | orchestrator | Sunday 29 March 2026 02:56:51 +0000 (0:00:00.352) 0:00:24.751 ********** 2026-03-29 02:56:56.364357 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:56.364363 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:56:56.364369 | orchestrator | skipping: [testbed-node-5] 2026-03-29 02:56:56.364375 | orchestrator | 2026-03-29 02:56:56.364382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-29 02:56:56.364388 | orchestrator | Sunday 29 March 2026 02:56:52 +0000 (0:00:00.707) 0:00:25.459 ********** 2026-03-29 
02:56:56.364394 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:56:56.364400 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:56:56.364406 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:56:56.364412 | orchestrator | 2026-03-29 02:56:56.364418 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-29 02:56:56.364434 | orchestrator | Sunday 29 March 2026 02:56:52 +0000 (0:00:00.428) 0:00:25.887 ********** 2026-03-29 02:56:56.364441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 02:56:56.364447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 02:56:56.364453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 02:56:56.364459 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:56.364465 | orchestrator | 2026-03-29 02:56:56.364471 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-29 02:56:56.364478 | orchestrator | Sunday 29 March 2026 02:56:53 +0000 (0:00:00.410) 0:00:26.298 ********** 2026-03-29 02:56:56.364484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 02:56:56.364490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 02:56:56.364496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 02:56:56.364502 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:56.364508 | orchestrator | 2026-03-29 02:56:56.364514 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-29 02:56:56.364521 | orchestrator | Sunday 29 March 2026 02:56:53 +0000 (0:00:00.396) 0:00:26.695 ********** 2026-03-29 02:56:56.364527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 02:56:56.364533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 02:56:56.364539 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 02:56:56.364545 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:56:56.364551 | orchestrator | 2026-03-29 02:56:56.364557 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-29 02:56:56.364563 | orchestrator | Sunday 29 March 2026 02:56:54 +0000 (0:00:00.411) 0:00:27.106 ********** 2026-03-29 02:56:56.364570 | orchestrator | ok: [testbed-node-3] 2026-03-29 02:56:56.364576 | orchestrator | ok: [testbed-node-4] 2026-03-29 02:56:56.364582 | orchestrator | ok: [testbed-node-5] 2026-03-29 02:56:56.364588 | orchestrator | 2026-03-29 02:56:56.364594 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-29 02:56:56.364600 | orchestrator | Sunday 29 March 2026 02:56:54 +0000 (0:00:00.362) 0:00:27.469 ********** 2026-03-29 02:56:56.364606 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-29 02:56:56.364613 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-29 02:56:56.364619 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-29 02:56:56.364625 | orchestrator | 2026-03-29 02:56:56.364631 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-29 02:56:56.364637 | orchestrator | Sunday 29 March 2026 02:56:55 +0000 (0:00:00.969) 0:00:28.439 ********** 2026-03-29 02:56:56.364643 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 02:56:56.364654 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 02:58:38.505589 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 02:58:38.505700 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-29 02:58:38.505715 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-29 02:58:38.505726 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 02:58:38.505737 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 02:58:38.505748 | orchestrator | 2026-03-29 02:58:38.505759 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-29 02:58:38.505771 | orchestrator | Sunday 29 March 2026 02:56:56 +0000 (0:00:00.869) 0:00:29.308 ********** 2026-03-29 02:58:38.505781 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 02:58:38.505792 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 02:58:38.505825 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 02:58:38.505836 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-29 02:58:38.505846 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 02:58:38.505856 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 02:58:38.505866 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 02:58:38.505876 | orchestrator | 2026-03-29 02:58:38.505886 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-29 02:58:38.505896 | orchestrator | Sunday 29 March 2026 02:56:58 +0000 (0:00:01.846) 0:00:31.154 ********** 2026-03-29 02:58:38.505906 | orchestrator | skipping: [testbed-node-3] 2026-03-29 02:58:38.505918 | orchestrator | skipping: [testbed-node-4] 2026-03-29 02:58:38.505928 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-29 02:58:38.505938 | orchestrator | 2026-03-29 02:58:38.505948 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-29 02:58:38.505957 | orchestrator | Sunday 29 March 2026 02:56:58 +0000 (0:00:00.655) 0:00:31.810 ********** 2026-03-29 02:58:38.505969 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 02:58:38.505996 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 02:58:38.506006 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 02:58:38.506070 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 02:58:38.506082 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 02:58:38.506092 | orchestrator | 2026-03-29 02:58:38.506102 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-29 02:58:38.506113 | orchestrator | Sunday 29 March 2026 02:57:42 +0000 (0:00:43.346) 0:01:15.157 ********** 2026-03-29 02:58:38.506123 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506132 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506141 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506149 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506158 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506168 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506178 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-29 02:58:38.506188 | orchestrator | 2026-03-29 02:58:38.506197 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-29 02:58:38.506217 | orchestrator | Sunday 29 March 2026 02:58:07 +0000 (0:00:25.736) 0:01:40.893 ********** 2026-03-29 02:58:38.506244 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506255 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506264 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506274 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506284 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506293 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506303 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 02:58:38.506314 | orchestrator | 2026-03-29 02:58:38.506324 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-29 02:58:38.506335 | orchestrator | Sunday 29 March 2026 02:58:20 +0000 (0:00:12.109) 0:01:53.003 ********** 2026-03-29 02:58:38.506346 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506356 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 02:58:38.506366 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 02:58:38.506376 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506385 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 02:58:38.506393 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 02:58:38.506402 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506411 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 02:58:38.506419 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 02:58:38.506428 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506481 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 02:58:38.506491 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 02:58:38.506500 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 02:58:38.506509 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-29 02:58:38.506518 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-29 02:58:38.506527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-29 02:58:38.506535 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-29 02:58:38.506552 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-29 02:58:38.506562 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-29 02:58:38.506572 | orchestrator |
2026-03-29 02:58:38.506581 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:58:38.506590 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-29 02:58:38.506602 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-29 02:58:38.506612 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-29 02:58:38.506621 | orchestrator |
2026-03-29 02:58:38.506630 | orchestrator |
2026-03-29 02:58:38.506645 | orchestrator |
2026-03-29 02:58:38.506653 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:58:38.506663 | orchestrator | Sunday 29 March 2026 02:58:38 +0000 (0:00:18.432) 0:02:11.435 **********
2026-03-29 02:58:38.506673 | orchestrator | ===============================================================================
2026-03-29 02:58:38.506684 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.35s
2026-03-29 02:58:38.506694 | orchestrator | generate keys ---------------------------------------------------------- 25.74s
2026-03-29 02:58:38.506703 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.43s
2026-03-29 02:58:38.506713 | orchestrator | get keys from monitors ------------------------------------------------- 12.11s
2026-03-29 02:58:38.506724 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.02s
2026-03-29 02:58:38.506735 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.85s
2026-03-29 02:58:38.506745 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.73s
2026-03-29 02:58:38.506755 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.23s
2026-03-29 02:58:38.506765 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 1.11s
2026-03-29 02:58:38.506775 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.97s
2026-03-29 02:58:38.506785 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.90s
2026-03-29 02:58:38.506795 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.89s
2026-03-29 02:58:38.506805 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.88s
2026-03-29 02:58:38.506823 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.87s
2026-03-29 02:58:38.878574 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.86s
2026-03-29 02:58:38.878681 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.77s
2026-03-29 02:58:38.878697 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.74s
2026-03-29 02:58:38.878709 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6 ---- 0.71s
2026-03-29 02:58:38.878722 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.68s
2026-03-29 02:58:38.878742 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.66s
2026-03-29 02:58:41.183707 | orchestrator | 2026-03-29 02:58:41 | INFO  | Task 1f2c3b6e-59bd-4e13-8a27-2e999a9020c1 (copy-ceph-keys) was prepared for execution.
2026-03-29 02:58:41.183791 | orchestrator | 2026-03-29 02:58:41 | INFO  | It takes a moment until task 1f2c3b6e-59bd-4e13-8a27-2e999a9020c1 (copy-ceph-keys) has been started and output is visible here.
2026-03-29 02:59:18.961488 | orchestrator |
2026-03-29 02:59:18.961582 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-29 02:59:18.961593 | orchestrator |
2026-03-29 02:59:18.961601 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-29 02:59:18.961607 | orchestrator | Sunday 29 March 2026 02:58:45 +0000 (0:00:00.159) 0:00:00.159 **********
2026-03-29 02:59:18.961614 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-29 02:59:18.961621 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-29 02:59:18.961627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-29 02:59:18.961633 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-29 02:59:18.961639 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-29 02:59:18.961645 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-29 02:59:18.961671 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-29 02:59:18.961677 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] =>
(item=ceph.client.gnocchi.keyring) 2026-03-29 02:59:18.961687 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-29 02:59:18.961697 | orchestrator | 2026-03-29 02:59:18.961706 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-29 02:59:18.961730 | orchestrator | Sunday 29 March 2026 02:58:49 +0000 (0:00:04.703) 0:00:04.863 ********** 2026-03-29 02:59:18.961741 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-29 02:59:18.961751 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.961762 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.961771 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 02:59:18.961781 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.961792 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-29 02:59:18.961802 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-29 02:59:18.961811 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-29 02:59:18.961817 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-29 02:59:18.961823 | orchestrator | 2026-03-29 02:59:18.961828 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-29 02:59:18.961834 | orchestrator | Sunday 29 March 2026 02:58:54 +0000 (0:00:04.396) 0:00:09.259 ********** 2026-03-29 02:59:18.961840 
| orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 02:59:18.961846 | orchestrator | 2026-03-29 02:59:18.961852 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-29 02:59:18.961858 | orchestrator | Sunday 29 March 2026 02:58:55 +0000 (0:00:00.940) 0:00:10.199 ********** 2026-03-29 02:59:18.961864 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-29 02:59:18.961870 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.961876 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.961882 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 02:59:18.961887 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.961893 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-29 02:59:18.961899 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-29 02:59:18.961904 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-29 02:59:18.961910 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-29 02:59:18.961915 | orchestrator | 2026-03-29 02:59:18.961921 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-29 02:59:18.961928 | orchestrator | Sunday 29 March 2026 02:59:08 +0000 (0:00:13.231) 0:00:23.431 ********** 2026-03-29 02:59:18.961938 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-29 02:59:18.961947 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-03-29 02:59:18.961957 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-29 02:59:18.961977 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-29 02:59:18.962007 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-29 02:59:18.962072 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-29 02:59:18.962079 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-29 02:59:18.962087 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-29 02:59:18.962093 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-29 02:59:18.962100 | orchestrator | 2026-03-29 02:59:18.962107 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-29 02:59:18.962113 | orchestrator | Sunday 29 March 2026 02:59:11 +0000 (0:00:03.078) 0:00:26.509 ********** 2026-03-29 02:59:18.962121 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-29 02:59:18.962128 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.962138 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.962148 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 02:59:18.962158 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-29 02:59:18.962168 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-29 02:59:18.962178 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring)
2026-03-29 02:59:18.962189 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-29 02:59:18.962206 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-29 02:59:18.962216 | orchestrator |
2026-03-29 02:59:18.962233 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 02:59:18.962244 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 02:59:18.962256 | orchestrator |
2026-03-29 02:59:18.962266 | orchestrator |
2026-03-29 02:59:18.962277 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 02:59:18.962288 | orchestrator | Sunday 29 March 2026 02:59:18 +0000 (0:00:07.052) 0:00:33.561 **********
2026-03-29 02:59:18.962299 | orchestrator | ===============================================================================
2026-03-29 02:59:18.962309 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.23s
2026-03-29 02:59:18.962319 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.05s
2026-03-29 02:59:18.962347 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.70s
2026-03-29 02:59:18.962358 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.40s
2026-03-29 02:59:18.962364 | orchestrator | Check if target directories exist --------------------------------------- 3.08s
2026-03-29 02:59:18.962370 | orchestrator | Create share directory -------------------------------------------------- 0.94s
2026-03-29 02:59:31.458101 | orchestrator | 2026-03-29 02:59:31 | INFO  | Task 6c47eda5-a028-4c4a-809d-ece56b9dacd7 (cephclient) was prepared for execution.
2026-03-29 02:59:31.458206 | orchestrator | 2026-03-29 02:59:31 | INFO  | It takes a moment until task 6c47eda5-a028-4c4a-809d-ece56b9dacd7 (cephclient) has been started and output is visible here. 2026-03-29 03:00:33.902310 | orchestrator | 2026-03-29 03:00:33.902423 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-29 03:00:33.902440 | orchestrator | 2026-03-29 03:00:33.902453 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-29 03:00:33.902465 | orchestrator | Sunday 29 March 2026 02:59:35 +0000 (0:00:00.246) 0:00:00.246 ********** 2026-03-29 03:00:33.902502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-29 03:00:33.902516 | orchestrator | 2026-03-29 03:00:33.902527 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-29 03:00:33.902538 | orchestrator | Sunday 29 March 2026 02:59:35 +0000 (0:00:00.235) 0:00:00.482 ********** 2026-03-29 03:00:33.902550 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-29 03:00:33.902561 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-29 03:00:33.902572 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-29 03:00:33.902584 | orchestrator | 2026-03-29 03:00:33.902595 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-29 03:00:33.902605 | orchestrator | Sunday 29 March 2026 02:59:37 +0000 (0:00:01.236) 0:00:01.719 ********** 2026-03-29 03:00:33.902617 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-29 03:00:33.902628 | orchestrator | 2026-03-29 03:00:33.902639 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-03-29 03:00:33.902650 | orchestrator | Sunday 29 March 2026 02:59:38 +0000 (0:00:01.477) 0:00:03.197 ********** 2026-03-29 03:00:33.902661 | orchestrator | changed: [testbed-manager] 2026-03-29 03:00:33.902672 | orchestrator | 2026-03-29 03:00:33.902683 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-29 03:00:33.902694 | orchestrator | Sunday 29 March 2026 02:59:39 +0000 (0:00:00.923) 0:00:04.120 ********** 2026-03-29 03:00:33.902705 | orchestrator | changed: [testbed-manager] 2026-03-29 03:00:33.902715 | orchestrator | 2026-03-29 03:00:33.902726 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-29 03:00:33.902737 | orchestrator | Sunday 29 March 2026 02:59:40 +0000 (0:00:00.928) 0:00:05.048 ********** 2026-03-29 03:00:33.902748 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-29 03:00:33.902758 | orchestrator | ok: [testbed-manager] 2026-03-29 03:00:33.902769 | orchestrator | 2026-03-29 03:00:33.902780 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-29 03:00:33.902790 | orchestrator | Sunday 29 March 2026 03:00:23 +0000 (0:00:43.185) 0:00:48.234 ********** 2026-03-29 03:00:33.902804 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-29 03:00:33.902817 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-29 03:00:33.902831 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-29 03:00:33.902844 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-29 03:00:33.902857 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-29 03:00:33.902870 | orchestrator | 2026-03-29 03:00:33.902882 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-29 03:00:33.902896 | 
orchestrator | Sunday 29 March 2026 03:00:27 +0000 (0:00:04.222) 0:00:52.457 ********** 2026-03-29 03:00:33.902909 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-29 03:00:33.902922 | orchestrator | 2026-03-29 03:00:33.902939 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-29 03:00:33.902958 | orchestrator | Sunday 29 March 2026 03:00:28 +0000 (0:00:00.479) 0:00:52.936 ********** 2026-03-29 03:00:33.902971 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:00:33.902982 | orchestrator | 2026-03-29 03:00:33.902992 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-29 03:00:33.903003 | orchestrator | Sunday 29 March 2026 03:00:28 +0000 (0:00:00.159) 0:00:53.095 ********** 2026-03-29 03:00:33.903028 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:00:33.903039 | orchestrator | 2026-03-29 03:00:33.903055 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-29 03:00:33.903074 | orchestrator | Sunday 29 March 2026 03:00:29 +0000 (0:00:00.515) 0:00:53.611 ********** 2026-03-29 03:00:33.903108 | orchestrator | changed: [testbed-manager] 2026-03-29 03:00:33.903127 | orchestrator | 2026-03-29 03:00:33.903144 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-29 03:00:33.903191 | orchestrator | Sunday 29 March 2026 03:00:30 +0000 (0:00:01.678) 0:00:55.290 ********** 2026-03-29 03:00:33.903249 | orchestrator | changed: [testbed-manager] 2026-03-29 03:00:33.903281 | orchestrator | 2026-03-29 03:00:33.903300 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******* 2026-03-29 03:00:33.903319 | orchestrator | Sunday 29 March 2026 03:00:31 +0000 (0:00:00.618) 0:00:55.974 ********** 2026-03-29 03:00:33.903337 | orchestrator | changed: [testbed-manager] 2026-03-29 03:00:33.903356 |
orchestrator | 2026-03-29 03:00:33.903374 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-29 03:00:33.903391 | orchestrator | Sunday 29 March 2026 03:00:32 +0000 (0:00:00.618) 0:00:56.593 ********** 2026-03-29 03:00:33.903410 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-29 03:00:33.903427 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-29 03:00:33.903446 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-29 03:00:33.903465 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-29 03:00:33.903485 | orchestrator | 2026-03-29 03:00:33.903503 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:00:33.903523 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 03:00:33.903541 | orchestrator | 2026-03-29 03:00:33.903560 | orchestrator | 2026-03-29 03:00:33.903593 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:00:33.903605 | orchestrator | Sunday 29 March 2026 03:00:33 +0000 (0:00:01.451) 0:00:58.044 ********** 2026-03-29 03:00:33.903615 | orchestrator | =============================================================================== 2026-03-29 03:00:33.903626 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.19s 2026-03-29 03:00:33.903637 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.22s 2026-03-29 03:00:33.903647 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.68s 2026-03-29 03:00:33.903658 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.48s 2026-03-29 03:00:33.903669 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.45s 2026-03-29 03:00:33.903679 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.24s 2026-03-29 03:00:33.903690 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.93s 2026-03-29 03:00:33.903701 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.92s 2026-03-29 03:00:33.903711 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.68s 2026-03-29 03:00:33.903722 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.62s 2026-03-29 03:00:33.903733 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.52s 2026-03-29 03:00:33.903743 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-03-29 03:00:33.903754 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-03-29 03:00:33.903765 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s 2026-03-29 03:00:36.188887 | orchestrator | 2026-03-29 03:00:36 | INFO  | Task 01a5788a-8a16-46be-aaf9-d41b3b5307ee (ceph-bootstrap-dashboard) was prepared for execution. 2026-03-29 03:00:36.188988 | orchestrator | 2026-03-29 03:00:36 | INFO  | It takes a moment until task 01a5788a-8a16-46be-aaf9-d41b3b5307ee (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-03-29 03:02:13.278886 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-29 03:02:13.279060 | orchestrator | 2.16.14 2026-03-29 03:02:13.279076 | orchestrator | 2026-03-29 03:02:13.279084 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-29 03:02:13.279091 | orchestrator | 2026-03-29 03:02:13.279098 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-29 03:02:13.279105 | orchestrator | Sunday 29 March 2026 03:00:40 +0000 (0:00:00.269) 0:00:00.269 ********** 2026-03-29 03:02:13.279112 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279119 | orchestrator | 2026-03-29 03:02:13.279125 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-29 03:02:13.279132 | orchestrator | Sunday 29 March 2026 03:00:42 +0000 (0:00:02.278) 0:00:02.548 ********** 2026-03-29 03:02:13.279138 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279144 | orchestrator | 2026-03-29 03:02:13.279150 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-29 03:02:13.279156 | orchestrator | Sunday 29 March 2026 03:00:44 +0000 (0:00:01.075) 0:00:03.623 ********** 2026-03-29 03:02:13.279163 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279169 | orchestrator | 2026-03-29 03:02:13.279175 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-29 03:02:13.279181 | orchestrator | Sunday 29 March 2026 03:00:45 +0000 (0:00:01.055) 0:00:04.679 ********** 2026-03-29 03:02:13.279187 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279193 | orchestrator | 2026-03-29 03:02:13.279199 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-29 03:02:13.279217 | orchestrator | Sunday 29 March 2026 
03:00:46 +0000 (0:00:01.186) 0:00:05.865 ********** 2026-03-29 03:02:13.279223 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279229 | orchestrator | 2026-03-29 03:02:13.279235 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-29 03:02:13.279241 | orchestrator | Sunday 29 March 2026 03:00:47 +0000 (0:00:01.077) 0:00:06.943 ********** 2026-03-29 03:02:13.279248 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279254 | orchestrator | 2026-03-29 03:02:13.279260 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-29 03:02:13.279266 | orchestrator | Sunday 29 March 2026 03:00:48 +0000 (0:00:01.039) 0:00:07.983 ********** 2026-03-29 03:02:13.279272 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279279 | orchestrator | 2026-03-29 03:02:13.279285 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-29 03:02:13.279291 | orchestrator | Sunday 29 March 2026 03:00:50 +0000 (0:00:02.122) 0:00:10.105 ********** 2026-03-29 03:02:13.279297 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279303 | orchestrator | 2026-03-29 03:02:13.279309 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-29 03:02:13.279315 | orchestrator | Sunday 29 March 2026 03:00:51 +0000 (0:00:01.135) 0:00:11.241 ********** 2026-03-29 03:02:13.279321 | orchestrator | changed: [testbed-manager] 2026-03-29 03:02:13.279327 | orchestrator | 2026-03-29 03:02:13.279333 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-29 03:02:13.279340 | orchestrator | Sunday 29 March 2026 03:01:48 +0000 (0:00:56.776) 0:01:08.017 ********** 2026-03-29 03:02:13.279346 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:02:13.279352 | orchestrator | 2026-03-29 03:02:13.279358 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-03-29 03:02:13.279364 | orchestrator | 2026-03-29 03:02:13.279370 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-29 03:02:13.279376 | orchestrator | Sunday 29 March 2026 03:01:48 +0000 (0:00:00.197) 0:01:08.215 ********** 2026-03-29 03:02:13.279383 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:02:13.279389 | orchestrator | 2026-03-29 03:02:13.279395 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-29 03:02:13.279401 | orchestrator | 2026-03-29 03:02:13.279407 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-29 03:02:13.279418 | orchestrator | Sunday 29 March 2026 03:02:00 +0000 (0:00:11.702) 0:01:19.917 ********** 2026-03-29 03:02:13.279425 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:02:13.279432 | orchestrator | 2026-03-29 03:02:13.279440 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-29 03:02:13.279447 | orchestrator | 2026-03-29 03:02:13.279455 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-29 03:02:13.279462 | orchestrator | Sunday 29 March 2026 03:02:11 +0000 (0:00:11.259) 0:01:31.177 ********** 2026-03-29 03:02:13.279470 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:02:13.279477 | orchestrator | 2026-03-29 03:02:13.279484 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:02:13.279493 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 03:02:13.279501 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 03:02:13.279509 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 03:02:13.279517 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 03:02:13.279524 | orchestrator | 2026-03-29 03:02:13.279531 | orchestrator | 2026-03-29 03:02:13.279538 | orchestrator | 2026-03-29 03:02:13.279546 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:02:13.279553 | orchestrator | Sunday 29 March 2026 03:02:12 +0000 (0:00:01.338) 0:01:32.516 ********** 2026-03-29 03:02:13.279560 | orchestrator | =============================================================================== 2026-03-29 03:02:13.279568 | orchestrator | Create admin user ------------------------------------------------------ 56.78s 2026-03-29 03:02:13.279588 | orchestrator | Restart ceph manager service ------------------------------------------- 24.30s 2026-03-29 03:02:13.279595 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.28s 2026-03-29 03:02:13.279603 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.12s 2026-03-29 03:02:13.279610 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.19s 2026-03-29 03:02:13.279617 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.14s 2026-03-29 03:02:13.279624 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.08s 2026-03-29 03:02:13.279632 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.08s 2026-03-29 03:02:13.279639 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.06s 2026-03-29 03:02:13.279646 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s 2026-03-29 03:02:13.279653 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.20s 2026-03-29 03:02:13.586368 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-03-29 03:02:15.639585 | orchestrator | 2026-03-29 03:02:15 | INFO  | Task a36fca9a-d086-431b-8a6c-5d64a95baaef (keystone) was prepared for execution. 2026-03-29 03:02:15.639704 | orchestrator | 2026-03-29 03:02:15 | INFO  | It takes a moment until task a36fca9a-d086-431b-8a6c-5d64a95baaef (keystone) has been started and output is visible here. 2026-03-29 03:02:23.026367 | orchestrator | 2026-03-29 03:02:23.026464 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:02:23.026476 | orchestrator | 2026-03-29 03:02:23.026483 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:02:23.026490 | orchestrator | Sunday 29 March 2026 03:02:19 +0000 (0:00:00.255) 0:00:00.255 ********** 2026-03-29 03:02:23.026497 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:02:23.026504 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:02:23.026528 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:02:23.026534 | orchestrator | 2026-03-29 03:02:23.026541 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:02:23.026547 | orchestrator | Sunday 29 March 2026 03:02:20 +0000 (0:00:00.317) 0:00:00.573 ********** 2026-03-29 03:02:23.026554 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-29 03:02:23.026561 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-29 03:02:23.026577 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-29 03:02:23.026584 | orchestrator | 2026-03-29 03:02:23.026590 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-29 03:02:23.026596 | orchestrator | 2026-03-29 03:02:23.026603 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-29 03:02:23.026609 | orchestrator | Sunday 29 March 2026 03:02:20 +0000 (0:00:00.427) 0:00:01.001 ********** 2026-03-29 03:02:23.026616 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:02:23.026623 | orchestrator | 2026-03-29 03:02:23.026630 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-29 03:02:23.026636 | orchestrator | Sunday 29 March 2026 03:02:21 +0000 (0:00:00.595) 0:00:01.597 ********** 2026-03-29 03:02:23.026648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:23.026657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:23.026691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:23.026707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:02:23.026715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:02:23.026722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:02:23.026729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:23.026735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:23.026741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:23.026755 | orchestrator | 2026-03-29 03:02:23.026766 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-29 03:02:23.026794 | orchestrator | Sunday 29 March 2026 03:02:23 +0000 (0:00:01.849) 0:00:03.447 ********** 2026-03-29 03:02:29.114546 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:02:29.114637 | orchestrator | 2026-03-29 03:02:29.114648 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-29 03:02:29.114656 | orchestrator | Sunday 29 March 2026 03:02:23 +0000 (0:00:00.285) 0:00:03.732 ********** 2026-03-29 03:02:29.114663 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:02:29.114670 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:02:29.114677 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:02:29.114683 | orchestrator | 2026-03-29 03:02:29.114689 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-29 03:02:29.114695 | orchestrator | Sunday 29 March 2026 03:02:23 +0000 (0:00:00.295) 0:00:04.028 ********** 2026-03-29 03:02:29.114702 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:02:29.114709 | orchestrator | 2026-03-29 03:02:29.114716 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 03:02:29.114723 | orchestrator | Sunday 29 March 2026 03:02:24 +0000 (0:00:00.803) 0:00:04.831 ********** 2026-03-29 03:02:29.114730 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:02:29.114736 | orchestrator | 2026-03-29 03:02:29.114743 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-29 03:02:29.114750 | orchestrator | Sunday 29 March 2026 03:02:24 +0000 (0:00:00.579) 0:00:05.411 ********** 2026-03-29 03:02:29.114761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:29.114770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:29.114809 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:29.114847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:02:29.114855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:02:29.114861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:02:29.114868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:29.114875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:29.114887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:29.114894 | orchestrator | 2026-03-29 03:02:29.114900 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-29 03:02:29.114907 | orchestrator | Sunday 29 March 2026 03:02:28 +0000 (0:00:03.579) 0:00:08.991 ********** 2026-03-29 03:02:29.114919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:02:29.911503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:29.911574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:02:29.911581 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:02:29.911589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:02:29.911607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:29.911614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:02:29.911618 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:02:29.911646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:02:29.911651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-29 03:02:29.911655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:02:29.911662 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:02:29.911666 | orchestrator | 2026-03-29 03:02:29.911677 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-29 03:02:29.911683 | orchestrator | Sunday 29 March 2026 03:02:29 +0000 (0:00:00.551) 0:00:09.542 ********** 2026-03-29 03:02:29.911687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:02:29.911696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:29.911709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:02:33.482263 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:02:33.482376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:02:33.482416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:33.483228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:02:33.483319 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 03:02:33.483360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:02:33.483378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:33.483415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:02:33.483429 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:02:33.483441 | orchestrator | 2026-03-29 03:02:33.483455 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-29 03:02:33.483468 | orchestrator | Sunday 29 March 2026 03:02:29 +0000 (0:00:00.798) 0:00:10.341 ********** 2026-03-29 03:02:33.483503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:33.483518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:33.483541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:33.483567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:02:38.186268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:02:38.186416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-03-29 03:02:38.186436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:38.186445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:38.186464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 
03:02:38.186472 | orchestrator | 2026-03-29 03:02:38.186481 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-29 03:02:38.186489 | orchestrator | Sunday 29 March 2026 03:02:33 +0000 (0:00:03.564) 0:00:13.905 ********** 2026-03-29 03:02:38.186514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:38.186531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-29 03:02:38.186539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:38.186547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:38.186557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:02:38.186570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:41.837431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:41.837577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:41.837595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:02:41.837606 | orchestrator | 2026-03-29 03:02:41.837618 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-29 03:02:41.837629 | orchestrator | Sunday 29 March 2026 03:02:38 +0000 (0:00:04.700) 0:00:18.606 ********** 2026-03-29 03:02:41.837639 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:02:41.837651 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:02:41.837660 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:02:41.837669 | orchestrator | 
2026-03-29 03:02:41.837679 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-29 03:02:41.837688 | orchestrator | Sunday 29 March 2026 03:02:39 +0000 (0:00:01.525) 0:00:20.132 ********** 2026-03-29 03:02:41.837697 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:02:41.837706 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:02:41.837715 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:02:41.837725 | orchestrator | 2026-03-29 03:02:41.837734 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-29 03:02:41.837766 | orchestrator | Sunday 29 March 2026 03:02:40 +0000 (0:00:00.747) 0:00:20.880 ********** 2026-03-29 03:02:41.837776 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:02:41.837786 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:02:41.837795 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:02:41.837804 | orchestrator | 2026-03-29 03:02:41.837813 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-29 03:02:41.837822 | orchestrator | Sunday 29 March 2026 03:02:40 +0000 (0:00:00.518) 0:00:21.398 ********** 2026-03-29 03:02:41.837831 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:02:41.837840 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:02:41.837850 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:02:41.837860 | orchestrator | 2026-03-29 03:02:41.837870 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-29 03:02:41.837911 | orchestrator | Sunday 29 March 2026 03:02:41 +0000 (0:00:00.301) 0:00:21.699 ********** 2026-03-29 03:02:41.838105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:02:41.838127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:41.838138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:02:41.838148 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:02:41.838168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:02:41.838192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:02:41.838213 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:02:41.838222 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:02:41.838241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 03:03:00.312042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 03:03:00.312173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 03:03:00.312188 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:03:00.312200 | orchestrator | 2026-03-29 03:03:00.312210 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 03:03:00.312220 | orchestrator | Sunday 29 March 2026 03:02:41 +0000 (0:00:00.564) 0:00:22.263 ********** 2026-03-29 03:03:00.312228 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:03:00.312236 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:03:00.312244 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:03:00.312259 | orchestrator | 2026-03-29 03:03:00.312268 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-29 03:03:00.312281 | orchestrator | Sunday 29 March 2026 03:02:42 +0000 (0:00:00.285) 0:00:22.549 ********** 2026-03-29 03:03:00.312331 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 03:03:00.312366 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 03:03:00.312380 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 03:03:00.312391 | orchestrator | 2026-03-29 03:03:00.312422 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-29 03:03:00.312436 | orchestrator | Sunday 29 March 2026 03:02:43 +0000 (0:00:01.736) 0:00:24.286 ********** 2026-03-29 03:03:00.312449 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:03:00.312462 | orchestrator | 2026-03-29 03:03:00.312476 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-29 03:03:00.312489 | orchestrator | Sunday 29 March 2026 03:02:44 +0000 (0:00:00.922) 0:00:25.208 ********** 2026-03-29 03:03:00.312502 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:03:00.312513 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:03:00.312527 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:03:00.312540 | orchestrator | 2026-03-29 03:03:00.312553 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-29 03:03:00.312566 | orchestrator | Sunday 29 March 2026 03:02:45 +0000 (0:00:00.536) 0:00:25.744 ********** 2026-03-29 03:03:00.312581 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:03:00.312593 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 03:03:00.312606 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 03:03:00.312635 | orchestrator | 2026-03-29 03:03:00.312650 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-29 03:03:00.312664 | orchestrator | Sunday 29 March 2026 03:02:46 +0000 (0:00:00.961) 
0:00:26.706 ********** 2026-03-29 03:03:00.312676 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:03:00.312691 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:03:00.312704 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:03:00.312718 | orchestrator | 2026-03-29 03:03:00.312731 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-29 03:03:00.312745 | orchestrator | Sunday 29 March 2026 03:02:46 +0000 (0:00:00.419) 0:00:27.125 ********** 2026-03-29 03:03:00.312759 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 03:03:00.312773 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 03:03:00.312787 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 03:03:00.312800 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 03:03:00.312815 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 03:03:00.312829 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 03:03:00.312858 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 03:03:00.312873 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 03:03:00.312948 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 03:03:00.312965 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-29 03:03:00.312979 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-29 
03:03:00.312993 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-29 03:03:00.313007 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 03:03:00.313018 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 03:03:00.313038 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 03:03:00.313047 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 03:03:00.313055 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 03:03:00.313063 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 03:03:00.313071 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 03:03:00.313078 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 03:03:00.313086 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 03:03:00.313094 | orchestrator | 2026-03-29 03:03:00.313114 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-29 03:03:00.313122 | orchestrator | Sunday 29 March 2026 03:02:55 +0000 (0:00:08.686) 0:00:35.812 ********** 2026-03-29 03:03:00.313130 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 03:03:00.313138 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 03:03:00.313146 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 03:03:00.313154 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 03:03:00.313162 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 03:03:00.313177 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 03:03:00.313185 | orchestrator | 2026-03-29 03:03:00.313194 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-29 03:03:00.313202 | orchestrator | Sunday 29 March 2026 03:02:57 +0000 (0:00:02.520) 0:00:38.333 ********** 2026-03-29 03:03:00.313213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:03:00.313233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:04:37.224946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 03:04:37.225041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:04:37.225065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:04:37.225072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 03:04:37.225078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:04:37.225098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:04:37.225125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 03:04:37.225133 | orchestrator | 2026-03-29 03:04:37.225140 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-03-29 03:04:37.225149 | orchestrator | Sunday 29 March 2026 03:03:00 +0000 (0:00:02.404) 0:00:40.738 ********** 2026-03-29 03:04:37.225155 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:04:37.225163 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:04:37.225170 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:04:37.225174 | orchestrator | 2026-03-29 03:04:37.225178 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-29 03:04:37.225182 | orchestrator | Sunday 29 March 2026 03:03:00 +0000 (0:00:00.412) 0:00:41.151 ********** 2026-03-29 03:04:37.225186 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:04:37.225190 | orchestrator | 2026-03-29 03:04:37.225193 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-29 03:04:37.225197 | orchestrator | Sunday 29 March 2026 03:03:03 +0000 (0:00:02.645) 0:00:43.796 ********** 2026-03-29 03:04:37.225201 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:04:37.225206 | orchestrator | 2026-03-29 03:04:37.225212 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-29 03:04:37.225218 | orchestrator | Sunday 29 March 2026 03:03:05 +0000 (0:00:02.573) 0:00:46.370 ********** 2026-03-29 03:04:37.225224 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:04:37.225230 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:04:37.225235 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:04:37.225240 | orchestrator | 2026-03-29 03:04:37.225246 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-29 03:04:37.225257 | orchestrator | Sunday 29 March 2026 03:03:06 +0000 (0:00:00.800) 0:00:47.171 ********** 2026-03-29 03:04:37.225263 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:04:37.225269 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:04:37.225276 | orchestrator | ok: 
[testbed-node-2] 2026-03-29 03:04:37.225280 | orchestrator | 2026-03-29 03:04:37.225284 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-29 03:04:37.225289 | orchestrator | Sunday 29 March 2026 03:03:07 +0000 (0:00:00.327) 0:00:47.499 ********** 2026-03-29 03:04:37.225293 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:04:37.225297 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:04:37.225300 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:04:37.225304 | orchestrator | 2026-03-29 03:04:37.225308 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-29 03:04:37.225312 | orchestrator | Sunday 29 March 2026 03:03:07 +0000 (0:00:00.549) 0:00:48.048 ********** 2026-03-29 03:04:37.225315 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:04:37.225319 | orchestrator | 2026-03-29 03:04:37.225323 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-29 03:04:37.225327 | orchestrator | Sunday 29 March 2026 03:03:23 +0000 (0:00:15.777) 0:01:03.826 ********** 2026-03-29 03:04:37.225330 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:04:37.225339 | orchestrator | 2026-03-29 03:04:37.225343 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 03:04:37.225347 | orchestrator | Sunday 29 March 2026 03:03:34 +0000 (0:00:11.330) 0:01:15.156 ********** 2026-03-29 03:04:37.225352 | orchestrator | 2026-03-29 03:04:37.225358 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 03:04:37.225365 | orchestrator | Sunday 29 March 2026 03:03:34 +0000 (0:00:00.064) 0:01:15.221 ********** 2026-03-29 03:04:37.225374 | orchestrator | 2026-03-29 03:04:37.225381 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 
03:04:37.225387 | orchestrator | Sunday 29 March 2026 03:03:34 +0000 (0:00:00.086) 0:01:15.307 ********** 2026-03-29 03:04:37.225392 | orchestrator | 2026-03-29 03:04:37.225398 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-29 03:04:37.225404 | orchestrator | Sunday 29 March 2026 03:03:34 +0000 (0:00:00.072) 0:01:15.380 ********** 2026-03-29 03:04:37.225410 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:04:37.225416 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:04:37.225421 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:04:37.225426 | orchestrator | 2026-03-29 03:04:37.225432 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-29 03:04:37.225438 | orchestrator | Sunday 29 March 2026 03:04:19 +0000 (0:00:44.419) 0:01:59.800 ********** 2026-03-29 03:04:37.225444 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:04:37.225450 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:04:37.225456 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:04:37.225461 | orchestrator | 2026-03-29 03:04:37.225467 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-29 03:04:37.225473 | orchestrator | Sunday 29 March 2026 03:04:29 +0000 (0:00:10.456) 0:02:10.256 ********** 2026-03-29 03:04:37.225479 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:04:37.225508 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:04:37.225514 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:04:37.225519 | orchestrator | 2026-03-29 03:04:37.225523 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 03:04:37.225528 | orchestrator | Sunday 29 March 2026 03:04:36 +0000 (0:00:06.876) 0:02:17.132 ********** 2026-03-29 03:04:37.225539 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:05:34.145268 | orchestrator |
2026-03-29 03:05:34.145399 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-29 03:05:34.145423 | orchestrator | Sunday 29 March 2026 03:04:37 +0000 (0:00:00.520) 0:02:17.653 **********
2026-03-29 03:05:34.145438 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:05:34.145454 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:05:34.145469 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:05:34.145483 | orchestrator |
2026-03-29 03:05:34.145497 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-29 03:05:34.145511 | orchestrator | Sunday 29 March 2026 03:04:38 +0000 (0:00:01.020) 0:02:18.674 **********
2026-03-29 03:05:34.145526 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:05:34.145541 | orchestrator |
2026-03-29 03:05:34.145556 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-29 03:05:34.145571 | orchestrator | Sunday 29 March 2026 03:04:39 +0000 (0:00:01.669) 0:02:20.343 **********
2026-03-29 03:05:34.145586 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-29 03:05:34.145600 | orchestrator |
2026-03-29 03:05:34.145613 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-29 03:05:34.145628 | orchestrator | Sunday 29 March 2026 03:04:53 +0000 (0:00:13.402) 0:02:33.745 **********
2026-03-29 03:05:34.145644 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-29 03:05:34.145659 | orchestrator |
2026-03-29 03:05:34.145674 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-29 03:05:34.145750 | orchestrator | Sunday 29 March 2026 03:05:21 +0000 (0:00:27.977) 0:03:01.723 **********
2026-03-29 03:05:34.145767 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-29 03:05:34.145784 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-29 03:05:34.145799 | orchestrator |
2026-03-29 03:05:34.145814 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-29 03:05:34.145830 | orchestrator | Sunday 29 March 2026 03:05:28 +0000 (0:00:07.529) 0:03:09.253 **********
2026-03-29 03:05:34.145846 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:05:34.145862 | orchestrator |
2026-03-29 03:05:34.145876 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-29 03:05:34.145891 | orchestrator | Sunday 29 March 2026 03:05:28 +0000 (0:00:00.126) 0:03:09.380 **********
2026-03-29 03:05:34.145905 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:05:34.145918 | orchestrator |
2026-03-29 03:05:34.145948 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-29 03:05:34.145965 | orchestrator | Sunday 29 March 2026 03:05:29 +0000 (0:00:00.140) 0:03:09.520 **********
2026-03-29 03:05:34.145980 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:05:34.145993 | orchestrator |
2026-03-29 03:05:34.146007 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-29 03:05:34.146085 | orchestrator | Sunday 29 March 2026 03:05:29 +0000 (0:00:00.117) 0:03:09.638 **********
2026-03-29 03:05:34.146102 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:05:34.146117 | orchestrator |
2026-03-29 03:05:34.146132 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-29 03:05:34.146145 | orchestrator | Sunday 29 March 2026 03:05:29 +0000 (0:00:00.530) 0:03:10.169 **********
2026-03-29 03:05:34.146158 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:05:34.146172 | orchestrator |
2026-03-29 03:05:34.146186 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-29 03:05:34.146200 | orchestrator | Sunday 29 March 2026 03:05:33 +0000 (0:00:03.492) 0:03:13.661 **********
2026-03-29 03:05:34.146214 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:05:34.146226 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:05:34.146239 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:05:34.146250 | orchestrator |
2026-03-29 03:05:34.146262 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:05:34.146276 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-29 03:05:34.146290 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-29 03:05:34.146302 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-29 03:05:34.146314 | orchestrator |
2026-03-29 03:05:34.146326 | orchestrator |
2026-03-29 03:05:34.146339 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:05:34.146351 | orchestrator | Sunday 29 March 2026 03:05:33 +0000 (0:00:00.496) 0:03:14.158 **********
2026-03-29 03:05:34.146364 | orchestrator | ===============================================================================
2026-03-29 03:05:34.146375 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 44.42s
2026-03-29 03:05:34.146388 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.98s
2026-03-29 03:05:34.146400 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.78s
2026-03-29 03:05:34.146413 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.40s
2026-03-29 03:05:34.146425 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.33s
2026-03-29 03:05:34.146437 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.46s
2026-03-29 03:05:34.146463 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.69s
2026-03-29 03:05:34.146477 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.53s
2026-03-29 03:05:34.146489 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.88s
2026-03-29 03:05:34.146522 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.70s
2026-03-29 03:05:34.146536 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.58s
2026-03-29 03:05:34.146548 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.56s
2026-03-29 03:05:34.146561 | orchestrator | keystone : Creating default user role ----------------------------------- 3.49s
2026-03-29 03:05:34.146573 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.65s
2026-03-29 03:05:34.146586 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.57s
2026-03-29 03:05:34.146598 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.52s
2026-03-29 03:05:34.146611 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.40s
2026-03-29 03:05:34.146624 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.85s
2026-03-29 03:05:34.146636 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.74s
2026-03-29 03:05:34.146648 | orchestrator | keystone : Run key distribution -----------------------------------------
1.67s
2026-03-29 03:05:36.434754 | orchestrator | 2026-03-29 03:05:36 | INFO  | Task 9a583d54-8dda-4b21-86d7-5370fc8626e0 (placement) was prepared for execution.
2026-03-29 03:05:36.434882 | orchestrator | 2026-03-29 03:05:36 | INFO  | It takes a moment until task 9a583d54-8dda-4b21-86d7-5370fc8626e0 (placement) has been started and output is visible here.
2026-03-29 03:06:13.103342 | orchestrator |
2026-03-29 03:06:13.103494 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:06:13.103524 | orchestrator |
2026-03-29 03:06:13.103541 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:06:13.103553 | orchestrator | Sunday 29 March 2026 03:05:40 +0000 (0:00:00.260) 0:00:00.260 **********
2026-03-29 03:06:13.103566 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:06:13.103578 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:06:13.103589 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:06:13.103601 | orchestrator |
2026-03-29 03:06:13.103612 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:06:13.103623 | orchestrator | Sunday 29 March 2026 03:05:40 +0000 (0:00:00.310) 0:00:00.571 **********
2026-03-29 03:06:13.103690 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-29 03:06:13.103705 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-29 03:06:13.103716 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-29 03:06:13.103727 | orchestrator |
2026-03-29 03:06:13.103738 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-29 03:06:13.103749 | orchestrator |
2026-03-29 03:06:13.103760 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-29 03:06:13.103772 | orchestrator | Sunday 29 March 2026 03:05:41 +0000 (0:00:00.457) 0:00:01.028 **********
2026-03-29 03:06:13.103783 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:06:13.103795 | orchestrator |
2026-03-29 03:06:13.103806 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-29 03:06:13.103817 | orchestrator | Sunday 29 March 2026 03:05:41 +0000 (0:00:00.540) 0:00:01.569 **********
2026-03-29 03:06:13.103828 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-29 03:06:13.103839 | orchestrator |
2026-03-29 03:06:13.103852 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-29 03:06:13.103896 | orchestrator | Sunday 29 March 2026 03:05:46 +0000 (0:00:04.180) 0:00:05.749 **********
2026-03-29 03:06:13.103910 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-29 03:06:13.103924 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-29 03:06:13.103937 | orchestrator |
2026-03-29 03:06:13.103950 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-29 03:06:13.103963 | orchestrator | Sunday 29 March 2026 03:05:53 +0000 (0:00:07.046) 0:00:12.796 **********
2026-03-29 03:06:13.103976 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-29 03:06:13.103989 | orchestrator |
2026-03-29 03:06:13.104001 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-29 03:06:13.104014 | orchestrator | Sunday 29 March 2026 03:05:57 +0000 (0:00:03.996) 0:00:16.792 **********
2026-03-29 03:06:13.104027 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 03:06:13.104040 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-29 03:06:13.104053 | orchestrator |
2026-03-29 03:06:13.104065 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-29 03:06:13.104078 | orchestrator | Sunday 29 March 2026 03:06:01 +0000 (0:00:04.454) 0:00:21.247 **********
2026-03-29 03:06:13.104090 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 03:06:13.104104 | orchestrator |
2026-03-29 03:06:13.104116 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-29 03:06:13.104127 | orchestrator | Sunday 29 March 2026 03:06:04 +0000 (0:00:03.404) 0:00:24.652 **********
2026-03-29 03:06:13.104138 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-29 03:06:13.104149 | orchestrator |
2026-03-29 03:06:13.104159 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-29 03:06:13.104170 | orchestrator | Sunday 29 March 2026 03:06:08 +0000 (0:00:03.885) 0:00:28.538 **********
2026-03-29 03:06:13.104181 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:06:13.104192 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:06:13.104202 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:06:13.104213 | orchestrator |
2026-03-29 03:06:13.104224 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-29 03:06:13.104235 | orchestrator | Sunday 29 March 2026 03:06:09 +0000 (0:00:00.335) 0:00:28.873 **********
2026-03-29 03:06:13.104250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:13.104295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:13.104317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:13.104329 | orchestrator |
2026-03-29 03:06:13.104340 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-29 03:06:13.104351 | orchestrator | Sunday 29 March 2026 03:06:10 +0000 (0:00:00.336) 0:00:29.936 **********
2026-03-29 03:06:13.104362 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:06:13.104373 | orchestrator |
2026-03-29 03:06:13.104384 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-29 03:06:13.104395 | orchestrator | Sunday 29 March 2026 03:06:10 +0000 (0:00:00.311) 0:00:30.272 **********
2026-03-29 03:06:13.104406 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:06:13.104416 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:06:13.104427 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:06:13.104438 | orchestrator |
2026-03-29 03:06:13.104449 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-29 03:06:13.104460 | orchestrator | Sunday 29 March 2026 03:06:10 +0000 (0:00:00.311) 0:00:30.584 **********
2026-03-29 03:06:13.104471 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:06:13.104482 | orchestrator |
2026-03-29 03:06:13.104493 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-29 03:06:13.104504 | orchestrator | Sunday 29 March 2026 03:06:11 +0000 (0:00:00.542) 0:00:31.126 **********
2026-03-29 03:06:13.104515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:13.104538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.977765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.977873 | orchestrator |
2026-03-29 03:06:15.977889 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-03-29 03:06:15.977903 | orchestrator | Sunday 29 March 2026 03:06:13 +0000 (0:00:01.647) 0:00:32.773 **********
2026-03-29 03:06:15.977916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.977929 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:06:15.977942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.977954 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:06:15.977965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.978002 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:06:15.978014 | orchestrator |
2026-03-29 03:06:15.978082 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-03-29 03:06:15.978126 | orchestrator | Sunday 29 March 2026 03:06:13 +0000 (0:00:00.541) 0:00:33.315 **********
2026-03-29 03:06:15.978156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.978179 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:06:15.978202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.978224 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:06:15.978239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.978253 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:06:15.978265 | orchestrator |
2026-03-29 03:06:15.978279 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-03-29 03:06:15.978304 | orchestrator | Sunday 29 March 2026 03:06:14 +0000 (0:00:00.710) 0:00:34.026 **********
2026-03-29 03:06:15.978317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api',
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:15.978346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:23.133173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:23.133278 | orchestrator |
2026-03-29 03:06:23.133292 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-03-29 03:06:23.133301 | orchestrator | Sunday 29 March 2026 03:06:15 +0000 (0:00:01.628) 0:00:35.655 **********
2026-03-29 03:06:23.133309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:23.133338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:23.133359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:23.133365 | orchestrator |
2026-03-29 03:06:23.133371 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-03-29 03:06:23.133377 | orchestrator | Sunday 29 March 2026 03:06:18 +0000 (0:00:02.354) 0:00:38.010 **********
2026-03-29 03:06:23.133398 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-29 03:06:23.133406 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-29 03:06:23.133411 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-29 03:06:23.133417 | orchestrator |
2026-03-29 03:06:23.133424 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-03-29 03:06:23.133430 | orchestrator | Sunday 29 March 2026 03:06:19 +0000 (0:00:01.588) 0:00:39.598 **********
2026-03-29 03:06:23.133436 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:06:23.133444 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:06:23.133451 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:06:23.133457 | orchestrator |
2026-03-29 03:06:23.133464 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-03-29 03:06:23.133471 | orchestrator | Sunday 29 March 2026 03:06:21 +0000 (0:00:01.389) 0:00:40.988 **********
2026-03-29 03:06:23.133477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:23.133493 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:06:23.133500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:23.133507 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:06:23.133518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:23.133525 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:06:23.133531 | orchestrator |
2026-03-29 03:06:23.133538 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-03-29 03:06:23.133544 | orchestrator | Sunday 29 March 2026 03:06:22 +0000 (0:00:00.739) 0:00:41.727 **********
2026-03-29 03:06:23.133558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:53.794442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:53.794652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 03:06:53.794677 | orchestrator |
2026-03-29 03:06:53.794694 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-03-29 03:06:53.794710 | orchestrator | Sunday 29 March 2026 03:06:23 +0000 (0:00:01.086) 0:00:42.814 **********
2026-03-29 03:06:53.794724 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:06:53.794738 | orchestrator |
2026-03-29 03:06:53.794753 | orchestrator
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-29 03:06:53.794766 | orchestrator | Sunday 29 March 2026 03:06:25 +0000 (0:00:02.302) 0:00:45.116 ********** 2026-03-29 03:06:53.794778 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:06:53.794791 | orchestrator | 2026-03-29 03:06:53.794804 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-29 03:06:53.794818 | orchestrator | Sunday 29 March 2026 03:06:27 +0000 (0:00:02.313) 0:00:47.430 ********** 2026-03-29 03:06:53.794831 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:06:53.794842 | orchestrator | 2026-03-29 03:06:53.794855 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-29 03:06:53.794868 | orchestrator | Sunday 29 March 2026 03:06:42 +0000 (0:00:14.936) 0:01:02.366 ********** 2026-03-29 03:06:53.794881 | orchestrator | 2026-03-29 03:06:53.794894 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-29 03:06:53.794908 | orchestrator | Sunday 29 March 2026 03:06:42 +0000 (0:00:00.070) 0:01:02.437 ********** 2026-03-29 03:06:53.794919 | orchestrator | 2026-03-29 03:06:53.794932 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-29 03:06:53.794946 | orchestrator | Sunday 29 March 2026 03:06:42 +0000 (0:00:00.068) 0:01:02.506 ********** 2026-03-29 03:06:53.794957 | orchestrator | 2026-03-29 03:06:53.794997 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-29 03:06:53.795012 | orchestrator | Sunday 29 March 2026 03:06:42 +0000 (0:00:00.069) 0:01:02.576 ********** 2026-03-29 03:06:53.795026 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:06:53.795040 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:06:53.795054 | orchestrator | changed: [testbed-node-1] 2026-03-29 
03:06:53.795067 | orchestrator | 2026-03-29 03:06:53.795081 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:06:53.795096 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 03:06:53.795110 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 03:06:53.795124 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 03:06:53.795149 | orchestrator | 2026-03-29 03:06:53.795162 | orchestrator | 2026-03-29 03:06:53.795175 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:06:53.795189 | orchestrator | Sunday 29 March 2026 03:06:53 +0000 (0:00:10.659) 0:01:13.235 ********** 2026-03-29 03:06:53.795201 | orchestrator | =============================================================================== 2026-03-29 03:06:53.795214 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.94s 2026-03-29 03:06:53.795248 | orchestrator | placement : Restart placement-api container ---------------------------- 10.66s 2026-03-29 03:06:53.795263 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.05s 2026-03-29 03:06:53.795276 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.45s 2026-03-29 03:06:53.795290 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.18s 2026-03-29 03:06:53.795302 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.00s 2026-03-29 03:06:53.795314 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.89s 2026-03-29 03:06:53.795327 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.40s 2026-03-29 03:06:53.795339 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.35s 2026-03-29 03:06:53.795352 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.31s 2026-03-29 03:06:53.795364 | orchestrator | placement : Creating placement databases -------------------------------- 2.30s 2026-03-29 03:06:53.795377 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.65s 2026-03-29 03:06:53.795390 | orchestrator | placement : Copying over config.json files for services ----------------- 1.63s 2026-03-29 03:06:53.795402 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.59s 2026-03-29 03:06:53.795414 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.39s 2026-03-29 03:06:53.795425 | orchestrator | placement : Check placement containers ---------------------------------- 1.09s 2026-03-29 03:06:53.795438 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.06s 2026-03-29 03:06:53.795450 | orchestrator | placement : Copying over existing policy file --------------------------- 0.74s 2026-03-29 03:06:53.795462 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s 2026-03-29 03:06:53.795475 | orchestrator | placement : include_tasks ----------------------------------------------- 0.54s 2026-03-29 03:06:55.796469 | orchestrator | 2026-03-29 03:06:55 | INFO  | Task 13c3865e-354a-4ab3-bbca-3395e1362166 (neutron) was prepared for execution. 2026-03-29 03:06:55.796664 | orchestrator | 2026-03-29 03:06:55 | INFO  | It takes a moment until task 13c3865e-354a-4ab3-bbca-3395e1362166 (neutron) has been started and output is visible here. 
2026-03-29 03:07:45.257879 | orchestrator |
2026-03-29 03:07:45.257998 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:07:45.258012 | orchestrator |
2026-03-29 03:07:45.258058 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:07:45.258067 | orchestrator | Sunday 29 March 2026 03:06:59 +0000 (0:00:00.254) 0:00:00.254 **********
2026-03-29 03:07:45.258126 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:07:45.258135 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:07:45.258141 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:07:45.258146 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:07:45.258151 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:07:45.258156 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:07:45.258161 | orchestrator |
2026-03-29 03:07:45.258167 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:07:45.258172 | orchestrator | Sunday 29 March 2026 03:07:00 +0000 (0:00:00.532) 0:00:00.787 **********
2026-03-29 03:07:45.258177 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-29 03:07:45.258183 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-29 03:07:45.258207 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-29 03:07:45.258212 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-29 03:07:45.258217 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-29 03:07:45.258222 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-29 03:07:45.258227 | orchestrator |
2026-03-29 03:07:45.258232 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-29 03:07:45.258237 | orchestrator |
2026-03-29 03:07:45.258253 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-29 03:07:45.258261 | orchestrator | Sunday 29 March 2026 03:07:01 +0000 (0:00:00.550) 0:00:01.337 **********
2026-03-29 03:07:45.258270 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:07:45.258280 | orchestrator |
2026-03-29 03:07:45.258287 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-29 03:07:45.258295 | orchestrator | Sunday 29 March 2026 03:07:02 +0000 (0:00:01.040) 0:00:02.378 **********
2026-03-29 03:07:45.258303 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:07:45.258312 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:07:45.258321 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:07:45.258329 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:07:45.258337 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:07:45.258344 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:07:45.258352 | orchestrator |
2026-03-29 03:07:45.258360 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-29 03:07:45.258368 | orchestrator | Sunday 29 March 2026 03:07:03 +0000 (0:00:01.180) 0:00:03.559 **********
2026-03-29 03:07:45.258376 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:07:45.258384 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:07:45.258392 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:07:45.258400 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:07:45.258408 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:07:45.258415 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:07:45.258420 | orchestrator |
2026-03-29 03:07:45.258426 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-29 03:07:45.258432 | orchestrator | Sunday 29 March 2026 03:07:04 +0000 (0:00:01.084) 0:00:04.643 **********
2026-03-29 03:07:45.258437 | orchestrator | ok: [testbed-node-0] => {
2026-03-29 03:07:45.258444 | orchestrator |     "changed": false,
2026-03-29 03:07:45.258450 | orchestrator |     "msg": "All assertions passed"
2026-03-29 03:07:45.258456 | orchestrator | }
2026-03-29 03:07:45.258462 | orchestrator | ok: [testbed-node-1] => {
2026-03-29 03:07:45.258467 | orchestrator |     "changed": false,
2026-03-29 03:07:45.258473 | orchestrator |     "msg": "All assertions passed"
2026-03-29 03:07:45.258479 | orchestrator | }
2026-03-29 03:07:45.258484 | orchestrator | ok: [testbed-node-2] => {
2026-03-29 03:07:45.258490 | orchestrator |     "changed": false,
2026-03-29 03:07:45.258495 | orchestrator |     "msg": "All assertions passed"
2026-03-29 03:07:45.258501 | orchestrator | }
2026-03-29 03:07:45.258506 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 03:07:45.258512 | orchestrator |     "changed": false,
2026-03-29 03:07:45.258517 | orchestrator |     "msg": "All assertions passed"
2026-03-29 03:07:45.258523 | orchestrator | }
2026-03-29 03:07:45.258529 | orchestrator | ok: [testbed-node-4] => {
2026-03-29 03:07:45.258564 | orchestrator |     "changed": false,
2026-03-29 03:07:45.258573 | orchestrator |     "msg": "All assertions passed"
2026-03-29 03:07:45.258582 | orchestrator | }
2026-03-29 03:07:45.258591 | orchestrator | ok: [testbed-node-5] => {
2026-03-29 03:07:45.258599 | orchestrator |     "changed": false,
2026-03-29 03:07:45.258608 | orchestrator |     "msg": "All assertions passed"
2026-03-29 03:07:45.258617 | orchestrator | }
2026-03-29 03:07:45.258625 | orchestrator |
2026-03-29 03:07:45.258634 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-29 03:07:45.258643 | orchestrator | Sunday 29 March 2026 03:07:05 +0000 (0:00:00.792) 0:00:05.436 **********
2026-03-29 03:07:45.258658 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:07:45.258664 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:07:45.258669 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:07:45.258676 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:07:45.258684 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:07:45.258693 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:07:45.258701 | orchestrator |
2026-03-29 03:07:45.258709 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-29 03:07:45.258718 | orchestrator | Sunday 29 March 2026 03:07:05 +0000 (0:00:00.620) 0:00:06.056 **********
2026-03-29 03:07:45.258727 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-29 03:07:45.258736 | orchestrator |
2026-03-29 03:07:45.258744 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-03-29 03:07:45.258752 | orchestrator | Sunday 29 March 2026 03:07:09 +0000 (0:00:04.198) 0:00:10.255 **********
2026-03-29 03:07:45.258758 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-29 03:07:45.258766 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-29 03:07:45.258771 | orchestrator |
2026-03-29 03:07:45.258792 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-29 03:07:45.258798 | orchestrator | Sunday 29 March 2026 03:07:17 +0000 (0:00:07.198) 0:00:17.453 **********
2026-03-29 03:07:45.258803 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 03:07:45.258808 | orchestrator |
2026-03-29 03:07:45.258813 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-29 03:07:45.258818 | orchestrator | Sunday 29 March 2026 03:07:20 +0000 (0:00:03.516) 0:00:20.970 **********
2026-03-29 03:07:45.258825 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 03:07:45.258835 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-29 03:07:45.258844 | orchestrator |
2026-03-29 03:07:45.258851 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-29 03:07:45.258860 | orchestrator | Sunday 29 March 2026 03:07:25 +0000 (0:00:04.439) 0:00:25.410 **********
2026-03-29 03:07:45.258868 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 03:07:45.258876 | orchestrator |
2026-03-29 03:07:45.258884 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-03-29 03:07:45.258889 | orchestrator | Sunday 29 March 2026 03:07:28 +0000 (0:00:03.554) 0:00:28.964 **********
2026-03-29 03:07:45.258896 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-29 03:07:45.258903 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-29 03:07:45.258912 | orchestrator |
2026-03-29 03:07:45.258920 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-29 03:07:45.258934 | orchestrator | Sunday 29 March 2026 03:07:37 +0000 (0:00:08.558) 0:00:37.522 **********
2026-03-29 03:07:45.258942 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:07:45.258951 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:07:45.258956 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:07:45.258961 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:07:45.258966 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:07:45.258970 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:07:45.258975 | orchestrator |
2026-03-29 03:07:45.258980 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-29 03:07:45.258985 | orchestrator | Sunday 29 March 2026 03:07:37 +0000 (0:00:00.782) 0:00:38.305 **********
2026-03-29 03:07:45.258990 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:07:45.258994 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:07:45.258999 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:07:45.259004 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:07:45.259008 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:07:45.259018 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:07:45.259023 | orchestrator |
2026-03-29 03:07:45.259028 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-29 03:07:45.259033 | orchestrator | Sunday 29 March 2026 03:07:40 +0000 (0:00:02.034) 0:00:40.340 **********
2026-03-29 03:07:45.259038 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:07:45.259043 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:07:45.259050 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:07:45.259058 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:07:45.259066 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:07:45.259075 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:07:45.259082 | orchestrator |
2026-03-29 03:07:45.259091 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-29 03:07:45.259099 | orchestrator | Sunday 29 March 2026 03:07:41 +0000 (0:00:01.179) 0:00:41.520 **********
2026-03-29 03:07:45.259106 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:07:45.259115 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:07:45.259120 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:07:45.259128 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:07:45.259135 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:07:45.259143 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:07:45.259151 | orchestrator |
2026-03-29 03:07:45.259159 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-29 03:07:45.259167 | orchestrator | Sunday 29 March 2026 03:07:43 +0000 (0:00:01.849) 0:00:43.370 **********
2026-03-29 03:07:45.259179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 03:07:45.259197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 03:07:50.654699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 03:07:50.654862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 03:07:50.654889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 03:07:50.654904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 03:07:50.654919 | orchestrator |
2026-03-29 03:07:50.654938 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-29 03:07:50.654955 | orchestrator | Sunday 29 March 2026 03:07:45 +0000 (0:00:02.177) 0:00:45.548 **********
2026-03-29 03:07:50.654970 | orchestrator | [WARNING]: Skipped
2026-03-29 03:07:50.654986 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-29 03:07:50.655003 | orchestrator | due to this access issue:
2026-03-29 03:07:50.655019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-29 03:07:50.655033 | orchestrator | a directory
2026-03-29 03:07:50.655048 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 03:07:50.655062 | orchestrator |
2026-03-29 03:07:50.655077 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-29 03:07:50.655093 | orchestrator | Sunday 29 March 2026 03:07:46 +0000 (0:00:00.786) 0:00:46.334 **********
2026-03-29 03:07:50.655110 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:07:50.655126 | orchestrator |
2026-03-29 03:07:50.655139 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-29 03:07:50.655178 | orchestrator | Sunday 29 March 2026 03:07:47 +0000 (0:00:01.274) 0:00:47.608 **********
2026-03-29 03:07:50.655221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 03:07:50.655240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 03:07:50.655256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 03:07:50.655270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 03:07:50.655296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 03:07:55.159321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 03:07:55.159399 | orchestrator |
2026-03-29 03:07:55.159409 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-03-29 03:07:55.159417 | orchestrator | Sunday 29 March 2026 03:07:50 +0000 (0:00:03.335) 0:00:50.944 **********
2026-03-29 03:07:55.159426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 03:07:55.159435 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:07:55.159443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'],
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:07:55.159450 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:07:55.159457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:07:55.159488 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:07:55.159512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:07:55.159519 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:07:55.159573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:07:55.159580 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:07:55.159588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:07:55.159594 | orchestrator | skipping: [testbed-node-5] 
2026-03-29 03:07:55.159599 | orchestrator | 2026-03-29 03:07:55.159606 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-29 03:07:55.159611 | orchestrator | Sunday 29 March 2026 03:07:52 +0000 (0:00:01.856) 0:00:52.800 ********** 2026-03-29 03:07:55.159618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:07:55.159624 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:07:55.159638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:07:55.159644 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:07:55.159668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:00.275819 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:00.275927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:00.275941 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:00.275951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:00.275959 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:00.275966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:00.275994 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:00.276002 | orchestrator | 2026-03-29 03:08:00.276010 
| orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-29 03:08:00.276018 | orchestrator | Sunday 29 March 2026 03:07:55 +0000 (0:00:02.646) 0:00:55.447 ********** 2026-03-29 03:08:00.276025 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:00.276032 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:00.276038 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:00.276045 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:00.276052 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:00.276058 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:00.276065 | orchestrator | 2026-03-29 03:08:00.276072 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-29 03:08:00.276079 | orchestrator | Sunday 29 March 2026 03:07:57 +0000 (0:00:02.260) 0:00:57.707 ********** 2026-03-29 03:08:00.276085 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:00.276092 | orchestrator | 2026-03-29 03:08:00.276099 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-29 03:08:00.276105 | orchestrator | Sunday 29 March 2026 03:07:57 +0000 (0:00:00.133) 0:00:57.841 ********** 2026-03-29 03:08:00.276112 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:00.276119 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:00.276125 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:00.276132 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:00.276138 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:00.276145 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:00.276152 | orchestrator | 2026-03-29 03:08:00.276158 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-29 03:08:00.276165 | orchestrator | Sunday 29 March 2026 03:07:58 +0000 (0:00:00.613) 0:00:58.454 ********** 
2026-03-29 03:08:00.276198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:00.276207 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:00.276214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:00.276229 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:00.276236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:00.276244 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:00.276251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:00.276262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:00.276270 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:00.276276 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:00.276289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:08.653306 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:08.653407 | orchestrator | 2026-03-29 03:08:08.653418 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-29 03:08:08.653428 | orchestrator | Sunday 29 March 2026 03:08:00 +0000 (0:00:02.107) 0:01:00.562 ********** 2026-03-29 03:08:08.653461 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:08.653471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:08.653480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:08.653501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:08:08.653600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:08:08.653616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:08:08.653623 | orchestrator | 2026-03-29 03:08:08.653631 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-29 03:08:08.653638 | orchestrator | Sunday 29 March 2026 03:08:03 +0000 (0:00:03.128) 0:01:03.690 ********** 2026-03-29 03:08:08.653645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:08.653658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:08.653666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:08.653684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:08:16.938126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:08:16.938240 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:08:16.938259 | orchestrator | 2026-03-29 03:08:16.938274 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-29 03:08:16.938287 | orchestrator | Sunday 29 March 2026 03:08:08 +0000 (0:00:05.257) 0:01:08.947 ********** 2026-03-29 03:08:16.938316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:16.938330 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 03:08:16.938342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:16.938378 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:16.938409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 
03:08:16.938421 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:16.938433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:16.938444 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:16.938455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:16.938467 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:16.938484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:16.938538 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:16.938552 | orchestrator | 2026-03-29 03:08:16.938564 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-29 03:08:16.938575 | orchestrator | Sunday 29 March 2026 03:08:10 +0000 (0:00:02.204) 0:01:11.152 ********** 2026-03-29 03:08:16.938586 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:16.938597 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:16.938610 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:16.938623 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:08:16.938636 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:08:16.938648 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:08:16.938660 | orchestrator | 2026-03-29 03:08:16.938673 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-29 03:08:16.938698 | orchestrator | Sunday 29 March 2026 03:08:13 +0000 (0:00:02.794) 0:01:13.947 ********** 2026-03-29 03:08:16.938720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:33.276642 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:33.276729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:33.276737 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:33.276742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:33.276746 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:33.276763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:33.276784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:33.276798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:08:33.276803 | orchestrator | 2026-03-29 03:08:33.276808 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-29 03:08:33.276813 | orchestrator | Sunday 29 March 2026 03:08:16 +0000 (0:00:03.287) 0:01:17.235 ********** 2026-03-29 03:08:33.276817 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:33.276820 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:33.276824 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:33.276828 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:33.276832 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:33.276835 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:33.276839 | orchestrator | 2026-03-29 03:08:33.276843 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] 
**************************** 2026-03-29 03:08:33.276847 | orchestrator | Sunday 29 March 2026 03:08:19 +0000 (0:00:02.468) 0:01:19.703 ********** 2026-03-29 03:08:33.276851 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:33.276854 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:33.276858 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:33.276862 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:33.276866 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:33.276869 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:33.276873 | orchestrator | 2026-03-29 03:08:33.276877 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-29 03:08:33.276881 | orchestrator | Sunday 29 March 2026 03:08:21 +0000 (0:00:02.050) 0:01:21.754 ********** 2026-03-29 03:08:33.276885 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:33.276889 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:33.276893 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:33.276896 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:33.276903 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:33.276907 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:33.276911 | orchestrator | 2026-03-29 03:08:33.276915 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-29 03:08:33.276918 | orchestrator | Sunday 29 March 2026 03:08:23 +0000 (0:00:02.032) 0:01:23.787 ********** 2026-03-29 03:08:33.276922 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:33.276926 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:33.276929 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:33.276933 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:33.276937 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:33.276941 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 03:08:33.276944 | orchestrator | 2026-03-29 03:08:33.276948 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-29 03:08:33.276952 | orchestrator | Sunday 29 March 2026 03:08:25 +0000 (0:00:01.802) 0:01:25.590 ********** 2026-03-29 03:08:33.276955 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:33.276959 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:33.276963 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:33.276967 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:33.276970 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:33.276974 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:33.276978 | orchestrator | 2026-03-29 03:08:33.276981 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-29 03:08:33.276988 | orchestrator | Sunday 29 March 2026 03:08:27 +0000 (0:00:01.872) 0:01:27.462 ********** 2026-03-29 03:08:33.276992 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:33.276996 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:33.276999 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:33.277003 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:33.277007 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:33.277010 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:33.277014 | orchestrator | 2026-03-29 03:08:33.277018 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-29 03:08:33.277022 | orchestrator | Sunday 29 March 2026 03:08:29 +0000 (0:00:01.959) 0:01:29.422 ********** 2026-03-29 03:08:33.277026 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 03:08:33.277030 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:33.277034 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 03:08:33.277038 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:33.277042 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 03:08:33.277048 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:33.277054 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 03:08:33.277060 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:33.277066 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 03:08:33.277073 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:33.277078 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 03:08:33.277084 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:33.277090 | orchestrator | 2026-03-29 03:08:33.277096 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-29 03:08:33.277101 | orchestrator | Sunday 29 March 2026 03:08:31 +0000 (0:00:02.092) 0:01:31.515 ********** 2026-03-29 03:08:33.277113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:35.485868 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:35.485979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:35.485989 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:35.486008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:35.486051 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:35.486058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:35.486064 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:08:35.486069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:35.486091 | orchestrator | skipping: 
[testbed-node-3] 2026-03-29 03:08:35.486109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:35.486114 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:08:35.486119 | orchestrator | 2026-03-29 03:08:35.486124 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-29 03:08:35.486130 | orchestrator | Sunday 29 March 2026 03:08:33 +0000 (0:00:02.053) 0:01:33.568 ********** 2026-03-29 03:08:35.486135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:35.486140 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:08:35.486148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:35.486153 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:08:35.486158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:08:35.486167 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:08:35.486172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:08:35.486176 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:08:35.486185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:09:00.602114 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.602237 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:09:00.602255 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.602266 | orchestrator | 2026-03-29 03:09:00.602277 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-29 03:09:00.602288 | orchestrator | Sunday 29 March 2026 03:08:35 +0000 (0:00:02.206) 0:01:35.774 ********** 2026-03-29 03:09:00.602298 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.602325 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.602336 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.602346 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.602356 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.602366 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.602376 | orchestrator | 2026-03-29 03:09:00.602385 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-29 03:09:00.602395 | orchestrator | Sunday 29 March 2026 03:08:37 +0000 (0:00:02.037) 0:01:37.812 ********** 2026-03-29 03:09:00.602405 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.602415 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
03:09:00.602425 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.602435 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:09:00.602535 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:09:00.602557 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:09:00.602574 | orchestrator | 2026-03-29 03:09:00.602591 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-29 03:09:00.602608 | orchestrator | Sunday 29 March 2026 03:08:41 +0000 (0:00:03.636) 0:01:41.449 ********** 2026-03-29 03:09:00.602626 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.602644 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.602661 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.602678 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.602690 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.602701 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.602713 | orchestrator | 2026-03-29 03:09:00.602725 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-29 03:09:00.602737 | orchestrator | Sunday 29 March 2026 03:08:43 +0000 (0:00:02.001) 0:01:43.450 ********** 2026-03-29 03:09:00.602748 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.602759 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.602771 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.602782 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.602794 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.602805 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.602817 | orchestrator | 2026-03-29 03:09:00.602828 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-29 03:09:00.602840 | orchestrator | Sunday 29 March 2026 03:08:45 +0000 (0:00:02.054) 0:01:45.504 ********** 2026-03-29 
03:09:00.602852 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.602863 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.602875 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.602887 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.602898 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.602909 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.602921 | orchestrator | 2026-03-29 03:09:00.602933 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-29 03:09:00.602943 | orchestrator | Sunday 29 March 2026 03:08:47 +0000 (0:00:02.052) 0:01:47.557 ********** 2026-03-29 03:09:00.602952 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.602962 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.602971 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.602981 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.602990 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.603000 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.603009 | orchestrator | 2026-03-29 03:09:00.603019 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-29 03:09:00.603028 | orchestrator | Sunday 29 March 2026 03:08:49 +0000 (0:00:02.384) 0:01:49.942 ********** 2026-03-29 03:09:00.603038 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.603048 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.603057 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.603067 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.603076 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.603086 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.603095 | orchestrator | 2026-03-29 03:09:00.603105 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-03-29 03:09:00.603115 | orchestrator | Sunday 29 March 2026 03:08:51 +0000 (0:00:02.131) 0:01:52.073 ********** 2026-03-29 03:09:00.603124 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.603134 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.603143 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.603153 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.603162 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.603172 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.603181 | orchestrator | 2026-03-29 03:09:00.603202 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-29 03:09:00.603230 | orchestrator | Sunday 29 March 2026 03:08:53 +0000 (0:00:02.101) 0:01:54.175 ********** 2026-03-29 03:09:00.603241 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.603263 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.603273 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.603283 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.603292 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.603302 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.603312 | orchestrator | 2026-03-29 03:09:00.603321 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-29 03:09:00.603331 | orchestrator | Sunday 29 March 2026 03:08:56 +0000 (0:00:02.314) 0:01:56.489 ********** 2026-03-29 03:09:00.603340 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 03:09:00.603351 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.603361 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 03:09:00.603371 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 03:09:00.603380 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 03:09:00.603390 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:00.603400 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 03:09:00.603410 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:00.603427 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 03:09:00.603437 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:00.603446 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 03:09:00.603456 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:00.603491 | orchestrator | 2026-03-29 03:09:00.603501 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-29 03:09:00.603511 | orchestrator | Sunday 29 March 2026 03:08:58 +0000 (0:00:02.043) 0:01:58.532 ********** 2026-03-29 03:09:00.603523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:09:00.603535 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:00.603546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:09:00.603563 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:00.603581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 03:09:06.168863 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:06.168971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:09:06.168986 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:06.169007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:09:06.169014 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:06.169021 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 03:09:06.169028 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:06.169034 | orchestrator | 2026-03-29 03:09:06.169041 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-29 03:09:06.169050 | orchestrator | Sunday 29 March 2026 03:09:00 +0000 (0:00:02.354) 0:02:00.887 ********** 2026-03-29 03:09:06.169080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-03-29 03:09:06.169113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:09:06.169129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 03:09:06.169140 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:09:06.169152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:09:06.169171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 03:09:06.169182 | orchestrator | 2026-03-29 03:09:06.169193 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 03:09:06.169203 | orchestrator | Sunday 29 March 2026 03:09:03 +0000 (0:00:02.586) 0:02:03.474 ********** 2026-03-29 03:09:06.169209 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:09:06.169216 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:09:06.169222 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:09:06.169228 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:09:06.169234 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:09:06.169240 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:09:06.169246 | orchestrator | 2026-03-29 03:09:06.169253 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-29 03:09:06.169259 | orchestrator | Sunday 29 March 2026 03:09:03 +0000 (0:00:00.761) 0:02:04.235 ********** 2026-03-29 03:09:06.169271 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:11:12.455411 | orchestrator | 2026-03-29 03:11:12.455521 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-29 03:11:12.455534 | orchestrator | Sunday 29 March 2026 03:09:06 +0000 (0:00:02.227) 0:02:06.462 ********** 2026-03-29 03:11:12.455541 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:11:12.455550 | orchestrator | 2026-03-29 03:11:12.455556 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-29 03:11:12.455563 | orchestrator | Sunday 29 March 2026 03:09:08 +0000 (0:00:02.376) 0:02:08.839 
********** 2026-03-29 03:11:12.455570 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:11:12.455576 | orchestrator | 2026-03-29 03:11:12.455583 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 03:11:12.455589 | orchestrator | Sunday 29 March 2026 03:09:53 +0000 (0:00:44.806) 0:02:53.645 ********** 2026-03-29 03:11:12.455596 | orchestrator | 2026-03-29 03:11:12.455602 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 03:11:12.455608 | orchestrator | Sunday 29 March 2026 03:09:53 +0000 (0:00:00.071) 0:02:53.717 ********** 2026-03-29 03:11:12.455614 | orchestrator | 2026-03-29 03:11:12.455621 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 03:11:12.455643 | orchestrator | Sunday 29 March 2026 03:09:53 +0000 (0:00:00.070) 0:02:53.787 ********** 2026-03-29 03:11:12.455649 | orchestrator | 2026-03-29 03:11:12.455656 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 03:11:12.455662 | orchestrator | Sunday 29 March 2026 03:09:53 +0000 (0:00:00.073) 0:02:53.861 ********** 2026-03-29 03:11:12.455668 | orchestrator | 2026-03-29 03:11:12.455674 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 03:11:12.455681 | orchestrator | Sunday 29 March 2026 03:09:53 +0000 (0:00:00.068) 0:02:53.929 ********** 2026-03-29 03:11:12.455687 | orchestrator | 2026-03-29 03:11:12.455694 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 03:11:12.455700 | orchestrator | Sunday 29 March 2026 03:09:53 +0000 (0:00:00.069) 0:02:53.998 ********** 2026-03-29 03:11:12.455736 | orchestrator | 2026-03-29 03:11:12.455743 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-29 03:11:12.455757 | 
orchestrator | Sunday 29 March 2026 03:09:53 +0000 (0:00:00.072) 0:02:54.070 ********** 2026-03-29 03:11:12.455763 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:11:12.455769 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:11:12.455776 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:11:12.455782 | orchestrator | 2026-03-29 03:11:12.455787 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-29 03:11:12.455793 | orchestrator | Sunday 29 March 2026 03:10:15 +0000 (0:00:21.772) 0:03:15.843 ********** 2026-03-29 03:11:12.455799 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:11:12.455805 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:11:12.455812 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:11:12.455819 | orchestrator | 2026-03-29 03:11:12.455825 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:11:12.455833 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 03:11:12.455842 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-29 03:11:12.455848 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-29 03:11:12.455854 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 03:11:12.455861 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 03:11:12.455867 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 03:11:12.455873 | orchestrator | 2026-03-29 03:11:12.455879 | orchestrator | 2026-03-29 03:11:12.455886 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 
03:11:12.455892 | orchestrator | Sunday 29 March 2026 03:11:11 +0000 (0:00:56.436) 0:04:12.280 ********** 2026-03-29 03:11:12.455898 | orchestrator | =============================================================================== 2026-03-29 03:11:12.455903 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 56.44s 2026-03-29 03:11:12.455910 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.81s 2026-03-29 03:11:12.455916 | orchestrator | neutron : Restart neutron-server container ----------------------------- 21.77s 2026-03-29 03:11:12.455922 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.56s 2026-03-29 03:11:12.455928 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.20s 2026-03-29 03:11:12.455934 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.26s 2026-03-29 03:11:12.455941 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.44s 2026-03-29 03:11:12.455947 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.20s 2026-03-29 03:11:12.455954 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.64s 2026-03-29 03:11:12.455960 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.55s 2026-03-29 03:11:12.455983 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.52s 2026-03-29 03:11:12.455990 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.34s 2026-03-29 03:11:12.455997 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.29s 2026-03-29 03:11:12.456002 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.13s 2026-03-29 03:11:12.456018 | 
orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.80s 2026-03-29 03:11:12.456025 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.65s 2026-03-29 03:11:12.456031 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.59s 2026-03-29 03:11:12.456037 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 2.47s 2026-03-29 03:11:12.456044 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 2.38s 2026-03-29 03:11:12.456050 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.38s 2026-03-29 03:11:14.760763 | orchestrator | 2026-03-29 03:11:14 | INFO  | Task ee01da38-00e3-4309-ae41-7995ebfda8d1 (nova) was prepared for execution. 2026-03-29 03:11:14.760853 | orchestrator | 2026-03-29 03:11:14 | INFO  | It takes a moment until task ee01da38-00e3-4309-ae41-7995ebfda8d1 (nova) has been started and output is visible here. 
2026-03-29 03:13:25.620688 | orchestrator | 2026-03-29 03:13:25.620797 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:13:25.620811 | orchestrator | 2026-03-29 03:13:25.620820 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-29 03:13:25.620834 | orchestrator | Sunday 29 March 2026 03:11:18 +0000 (0:00:00.291) 0:00:00.291 ********** 2026-03-29 03:13:25.620851 | orchestrator | changed: [testbed-manager] 2026-03-29 03:13:25.620874 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.620887 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:13:25.620900 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:13:25.620913 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:13:25.620926 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:13:25.620938 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:13:25.620950 | orchestrator | 2026-03-29 03:13:25.620963 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:13:25.620976 | orchestrator | Sunday 29 March 2026 03:11:19 +0000 (0:00:00.873) 0:00:01.164 ********** 2026-03-29 03:13:25.620989 | orchestrator | changed: [testbed-manager] 2026-03-29 03:13:25.621002 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.621016 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:13:25.621029 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:13:25.621043 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:13:25.621058 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:13:25.621071 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:13:25.621084 | orchestrator | 2026-03-29 03:13:25.621093 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:13:25.621103 | orchestrator | Sunday 29 March 2026 03:11:20 +0000 (0:00:00.863) 0:00:02.027 
********** 2026-03-29 03:13:25.621118 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-29 03:13:25.621141 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-29 03:13:25.621154 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-29 03:13:25.621166 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-29 03:13:25.621179 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-29 03:13:25.621192 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-29 03:13:25.621204 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-29 03:13:25.621217 | orchestrator | 2026-03-29 03:13:25.621229 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-29 03:13:25.621243 | orchestrator | 2026-03-29 03:13:25.621282 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-29 03:13:25.621299 | orchestrator | Sunday 29 March 2026 03:11:21 +0000 (0:00:00.727) 0:00:02.755 ********** 2026-03-29 03:13:25.621314 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:13:25.621328 | orchestrator | 2026-03-29 03:13:25.621371 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-29 03:13:25.621385 | orchestrator | Sunday 29 March 2026 03:11:22 +0000 (0:00:00.735) 0:00:03.490 ********** 2026-03-29 03:13:25.621399 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-29 03:13:25.621414 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-29 03:13:25.621428 | orchestrator | 2026-03-29 03:13:25.621442 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-29 03:13:25.621456 | orchestrator | Sunday 29 March 2026 03:11:26 +0000 (0:00:04.381) 0:00:07.872 
********** 2026-03-29 03:13:25.621469 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 03:13:25.621484 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 03:13:25.621498 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.621512 | orchestrator | 2026-03-29 03:13:25.621525 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-29 03:13:25.621539 | orchestrator | Sunday 29 March 2026 03:11:31 +0000 (0:00:04.519) 0:00:12.391 ********** 2026-03-29 03:13:25.621548 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.621558 | orchestrator | 2026-03-29 03:13:25.621567 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-29 03:13:25.621576 | orchestrator | Sunday 29 March 2026 03:11:31 +0000 (0:00:00.639) 0:00:13.031 ********** 2026-03-29 03:13:25.621586 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.621595 | orchestrator | 2026-03-29 03:13:25.621603 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-29 03:13:25.621610 | orchestrator | Sunday 29 March 2026 03:11:33 +0000 (0:00:01.380) 0:00:14.411 ********** 2026-03-29 03:13:25.621618 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.621625 | orchestrator | 2026-03-29 03:13:25.621633 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 03:13:25.621641 | orchestrator | Sunday 29 March 2026 03:11:35 +0000 (0:00:02.564) 0:00:16.976 ********** 2026-03-29 03:13:25.621649 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:13:25.621656 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.621664 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.621672 | orchestrator | 2026-03-29 03:13:25.621680 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-29 
03:13:25.621688 | orchestrator | Sunday 29 March 2026 03:11:35 +0000 (0:00:00.322) 0:00:17.298 ********** 2026-03-29 03:13:25.621695 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:13:25.621703 | orchestrator | 2026-03-29 03:13:25.621711 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-29 03:13:25.621719 | orchestrator | Sunday 29 March 2026 03:12:10 +0000 (0:00:34.568) 0:00:51.866 ********** 2026-03-29 03:13:25.621727 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.621734 | orchestrator | 2026-03-29 03:13:25.621742 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-29 03:13:25.621749 | orchestrator | Sunday 29 March 2026 03:12:27 +0000 (0:00:16.451) 0:01:08.317 ********** 2026-03-29 03:13:25.621757 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:13:25.621765 | orchestrator | 2026-03-29 03:13:25.621787 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-29 03:13:25.621795 | orchestrator | Sunday 29 March 2026 03:12:41 +0000 (0:00:14.913) 0:01:23.231 ********** 2026-03-29 03:13:25.621821 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:13:25.621830 | orchestrator | 2026-03-29 03:13:25.621838 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-29 03:13:25.621845 | orchestrator | Sunday 29 March 2026 03:12:42 +0000 (0:00:00.707) 0:01:23.938 ********** 2026-03-29 03:13:25.621853 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:13:25.621861 | orchestrator | 2026-03-29 03:13:25.621869 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 03:13:25.621876 | orchestrator | Sunday 29 March 2026 03:12:43 +0000 (0:00:00.461) 0:01:24.400 ********** 2026-03-29 03:13:25.621894 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-29 03:13:25.621902 | orchestrator | 2026-03-29 03:13:25.621910 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-29 03:13:25.621918 | orchestrator | Sunday 29 March 2026 03:12:43 +0000 (0:00:00.716) 0:01:25.116 ********** 2026-03-29 03:13:25.621926 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:13:25.621933 | orchestrator | 2026-03-29 03:13:25.621941 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-29 03:13:25.621949 | orchestrator | Sunday 29 March 2026 03:13:05 +0000 (0:00:21.641) 0:01:46.758 ********** 2026-03-29 03:13:25.621957 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:13:25.621964 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.621972 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.621980 | orchestrator | 2026-03-29 03:13:25.621988 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-29 03:13:25.621996 | orchestrator | 2026-03-29 03:13:25.622003 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-29 03:13:25.622011 | orchestrator | Sunday 29 March 2026 03:13:05 +0000 (0:00:00.276) 0:01:47.035 ********** 2026-03-29 03:13:25.622079 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:13:25.622087 | orchestrator | 2026-03-29 03:13:25.622095 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-29 03:13:25.622103 | orchestrator | Sunday 29 March 2026 03:13:06 +0000 (0:00:00.663) 0:01:47.698 ********** 2026-03-29 03:13:25.622111 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622119 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622126 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.622134 | orchestrator | 
2026-03-29 03:13:25.622142 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-29 03:13:25.622150 | orchestrator | Sunday 29 March 2026 03:13:08 +0000 (0:00:02.304) 0:01:50.003 ********** 2026-03-29 03:13:25.622158 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622165 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622173 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.622181 | orchestrator | 2026-03-29 03:13:25.622189 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-29 03:13:25.622196 | orchestrator | Sunday 29 March 2026 03:13:11 +0000 (0:00:02.381) 0:01:52.384 ********** 2026-03-29 03:13:25.622204 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:13:25.622212 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622220 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622227 | orchestrator | 2026-03-29 03:13:25.622235 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-29 03:13:25.622243 | orchestrator | Sunday 29 March 2026 03:13:11 +0000 (0:00:00.531) 0:01:52.916 ********** 2026-03-29 03:13:25.622251 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 03:13:25.622273 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622281 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 03:13:25.622289 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622296 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-29 03:13:25.622304 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-29 03:13:25.622312 | orchestrator | 2026-03-29 03:13:25.622320 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-29 03:13:25.622328 | orchestrator | Sunday 29 March 2026 03:13:19 +0000 
(0:00:08.357) 0:02:01.274 ********** 2026-03-29 03:13:25.622336 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:13:25.622344 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622351 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622359 | orchestrator | 2026-03-29 03:13:25.622367 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-29 03:13:25.622382 | orchestrator | Sunday 29 March 2026 03:13:20 +0000 (0:00:00.346) 0:02:01.620 ********** 2026-03-29 03:13:25.622390 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-29 03:13:25.622398 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:13:25.622406 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 03:13:25.622414 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622421 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 03:13:25.622429 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622437 | orchestrator | 2026-03-29 03:13:25.622445 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-29 03:13:25.622452 | orchestrator | Sunday 29 March 2026 03:13:21 +0000 (0:00:01.132) 0:02:02.753 ********** 2026-03-29 03:13:25.622460 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622468 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622476 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:13:25.622483 | orchestrator | 2026-03-29 03:13:25.622491 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-29 03:13:25.622499 | orchestrator | Sunday 29 March 2026 03:13:22 +0000 (0:00:00.563) 0:02:03.316 ********** 2026-03-29 03:13:25.622506 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622514 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622522 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 03:13:25.622529 | orchestrator | 2026-03-29 03:13:25.622537 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-29 03:13:25.622546 | orchestrator | Sunday 29 March 2026 03:13:23 +0000 (0:00:01.085) 0:02:04.402 ********** 2026-03-29 03:13:25.622553 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:13:25.622561 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:13:25.622576 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:14:52.879018 | orchestrator | 2026-03-29 03:14:52.879128 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-29 03:14:52.879145 | orchestrator | Sunday 29 March 2026 03:13:25 +0000 (0:00:02.504) 0:02:06.906 ********** 2026-03-29 03:14:52.879157 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:52.879170 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:14:52.879182 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:14:52.879194 | orchestrator | 2026-03-29 03:14:52.879242 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-29 03:14:52.879250 | orchestrator | Sunday 29 March 2026 03:13:49 +0000 (0:00:23.469) 0:02:30.376 ********** 2026-03-29 03:14:52.879257 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:52.879263 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:14:52.879270 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:14:52.879276 | orchestrator | 2026-03-29 03:14:52.879282 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-29 03:14:52.879289 | orchestrator | Sunday 29 March 2026 03:14:03 +0000 (0:00:13.976) 0:02:44.352 ********** 2026-03-29 03:14:52.879295 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:14:52.879301 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:52.879308 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 03:14:52.879314 | orchestrator | 2026-03-29 03:14:52.879320 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-29 03:14:52.879326 | orchestrator | Sunday 29 March 2026 03:14:04 +0000 (0:00:01.063) 0:02:45.416 ********** 2026-03-29 03:14:52.879333 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:52.879339 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:14:52.879345 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:14:52.879351 | orchestrator | 2026-03-29 03:14:52.879357 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-29 03:14:52.879363 | orchestrator | Sunday 29 March 2026 03:14:18 +0000 (0:00:14.580) 0:02:59.996 ********** 2026-03-29 03:14:52.879369 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:14:52.879376 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:52.879383 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:14:52.879417 | orchestrator | 2026-03-29 03:14:52.879428 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-29 03:14:52.879437 | orchestrator | Sunday 29 March 2026 03:14:19 +0000 (0:00:01.061) 0:03:01.058 ********** 2026-03-29 03:14:52.879444 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:14:52.879450 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:52.879456 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:14:52.879462 | orchestrator | 2026-03-29 03:14:52.879468 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-29 03:14:52.879474 | orchestrator | 2026-03-29 03:14:52.879481 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 03:14:52.879487 | orchestrator | Sunday 29 March 2026 03:14:20 +0000 (0:00:00.392) 0:03:01.450 ********** 2026-03-29 03:14:52.879493 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:14:52.879500 | orchestrator | 2026-03-29 03:14:52.879543 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-29 03:14:52.879550 | orchestrator | Sunday 29 March 2026 03:14:20 +0000 (0:00:00.758) 0:03:02.208 ********** 2026-03-29 03:14:52.879556 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-29 03:14:52.879562 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-29 03:14:52.879568 | orchestrator | 2026-03-29 03:14:52.879574 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-29 03:14:52.879580 | orchestrator | Sunday 29 March 2026 03:14:24 +0000 (0:00:03.707) 0:03:05.916 ********** 2026-03-29 03:14:52.879588 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-29 03:14:52.879597 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-29 03:14:52.879605 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-29 03:14:52.879613 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-29 03:14:52.879620 | orchestrator | 2026-03-29 03:14:52.879628 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-29 03:14:52.879635 | orchestrator | Sunday 29 March 2026 03:14:31 +0000 (0:00:07.378) 0:03:13.295 ********** 2026-03-29 03:14:52.879643 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 03:14:52.879650 | orchestrator | 2026-03-29 03:14:52.879656 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-03-29 03:14:52.879664 | orchestrator | Sunday 29 March 2026 03:14:35 +0000 (0:00:03.633) 0:03:16.928 ********** 2026-03-29 03:14:52.879671 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 03:14:52.879678 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-29 03:14:52.879685 | orchestrator | 2026-03-29 03:14:52.879692 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-29 03:14:52.879699 | orchestrator | Sunday 29 March 2026 03:14:39 +0000 (0:00:04.235) 0:03:21.163 ********** 2026-03-29 03:14:52.879707 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 03:14:52.879714 | orchestrator | 2026-03-29 03:14:52.879721 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-29 03:14:52.879728 | orchestrator | Sunday 29 March 2026 03:14:43 +0000 (0:00:03.445) 0:03:24.609 ********** 2026-03-29 03:14:52.879735 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-29 03:14:52.879748 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-29 03:14:52.879755 | orchestrator | 2026-03-29 03:14:52.879763 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-29 03:14:52.879785 | orchestrator | Sunday 29 March 2026 03:14:51 +0000 (0:00:08.204) 0:03:32.814 ********** 2026-03-29 03:14:52.879806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:14:52.879817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:14:52.879826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:14:52.879843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-29 03:14:57.432394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:14:57.432472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:14:57.432478 | orchestrator | 2026-03-29 03:14:57.432484 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-29 03:14:57.432491 | orchestrator | Sunday 29 March 2026 03:14:52 +0000 (0:00:01.352) 0:03:34.166 ********** 2026-03-29 03:14:57.432495 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:14:57.432500 | orchestrator | 2026-03-29 03:14:57.432504 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-29 03:14:57.432508 | orchestrator | Sunday 29 March 2026 03:14:52 +0000 (0:00:00.128) 0:03:34.295 ********** 2026-03-29 03:14:57.432512 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:14:57.432516 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:57.432519 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:14:57.432523 | orchestrator | 2026-03-29 03:14:57.432527 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-29 03:14:57.432531 | orchestrator | Sunday 29 March 2026 03:14:53 +0000 (0:00:00.312) 0:03:34.607 ********** 2026-03-29 03:14:57.432535 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:14:57.432539 | orchestrator | 2026-03-29 03:14:57.432548 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-29 03:14:57.432552 | orchestrator | Sunday 29 March 2026 03:14:53 +0000 (0:00:00.667) 0:03:35.275 ********** 2026-03-29 03:14:57.432556 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:14:57.432560 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:57.432564 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:14:57.432567 | orchestrator | 2026-03-29 03:14:57.432571 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 03:14:57.432575 | orchestrator | Sunday 29 March 2026 03:14:54 +0000 (0:00:00.497) 0:03:35.772 ********** 2026-03-29 03:14:57.432579 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:14:57.432585 | orchestrator | 2026-03-29 03:14:57.432588 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-29 03:14:57.432592 | orchestrator | Sunday 29 March 2026 03:14:55 +0000 (0:00:00.585) 0:03:36.358 ********** 2026-03-29 03:14:57.432612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:14:57.432643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:14:57.432649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:14:57.432653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:14:57.432661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:14:57.432668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:14:57.432672 | orchestrator | 2026-03-29 03:14:57.432679 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-29 03:14:59.118380 | orchestrator | Sunday 29 March 2026 03:14:57 +0000 (0:00:02.360) 0:03:38.718 ********** 2026-03-29 03:14:59.118485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:14:59.118502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:14:59.118513 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:14:59.118523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:14:59.118567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:14:59.118576 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:14:59.118602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:14:59.118612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:14:59.118620 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:14:59.118629 | orchestrator | 2026-03-29 03:14:59.118639 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-29 03:14:59.118647 | orchestrator | Sunday 29 March 2026 03:14:58 +0000 (0:00:00.820) 0:03:39.539 
********** 2026-03-29 03:14:59.118656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:14:59.118676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:14:59.118685 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 03:14:59.118700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:15:01.524550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:15:01.524634 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
03:15:01.524647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:15:01.524677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:15:01.524685 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
03:15:01.524692 | orchestrator | 2026-03-29 03:15:01.524699 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-29 03:15:01.524707 | orchestrator | Sunday 29 March 2026 03:14:59 +0000 (0:00:00.868) 0:03:40.408 ********** 2026-03-29 03:15:01.524726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:01.524748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:01.524763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:01.524774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:15:01.524781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:15:01.524793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-29 03:15:07.946179 | orchestrator | 2026-03-29 03:15:07.946318 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-29 03:15:07.946331 | orchestrator | Sunday 29 March 2026 03:15:01 +0000 (0:00:02.409) 0:03:42.817 ********** 2026-03-29 03:15:07.946343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:07.946388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:07.946411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:07.946435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:15:07.946445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:15:07.946458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:15:07.946466 | orchestrator | 2026-03-29 03:15:07.946474 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-29 03:15:07.946482 | orchestrator | Sunday 29 March 2026 03:15:07 +0000 (0:00:05.831) 0:03:48.649 ********** 2026-03-29 03:15:07.946494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:15:07.946503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:15:07.946511 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:15:07.946526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:15:12.362575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:15:12.362702 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:15:12.362725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 03:15:12.362760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:15:12.362773 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:15:12.362784 | orchestrator | 2026-03-29 03:15:12.362797 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-29 03:15:12.362810 | orchestrator | Sunday 29 March 2026 03:15:07 +0000 (0:00:00.590) 0:03:49.240 ********** 2026-03-29 03:15:12.362821 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:15:12.362832 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:15:12.362842 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:15:12.362853 | orchestrator | 2026-03-29 03:15:12.362864 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-29 03:15:12.362875 | orchestrator | Sunday 29 March 2026 03:15:09 +0000 (0:00:01.553) 0:03:50.793 ********** 2026-03-29 03:15:12.362885 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:15:12.362896 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:15:12.362907 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:15:12.362919 | orchestrator | 2026-03-29 03:15:12.362929 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-29 03:15:12.362965 | orchestrator | Sunday 29 March 2026 03:15:09 +0000 (0:00:00.333) 0:03:51.127 ********** 2026-03-29 03:15:12.362999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:12.363013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:12.363035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 03:15:12.363050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:15:12.363072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:15:12.363094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:15:57.185011 | orchestrator | 2026-03-29 03:15:57.185109 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 03:15:57.185120 | orchestrator | Sunday 29 March 2026 03:15:11 +0000 (0:00:02.107) 0:03:53.234 ********** 2026-03-29 03:15:57.185127 | orchestrator | 2026-03-29 03:15:57.185135 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 03:15:57.185142 | orchestrator | Sunday 29 March 2026 03:15:12 +0000 (0:00:00.139) 0:03:53.373 ********** 2026-03-29 
03:15:57.185148 | orchestrator | 2026-03-29 03:15:57.185155 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 03:15:57.185162 | orchestrator | Sunday 29 March 2026 03:15:12 +0000 (0:00:00.139) 0:03:53.513 ********** 2026-03-29 03:15:57.185231 | orchestrator | 2026-03-29 03:15:57.185239 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-29 03:15:57.185247 | orchestrator | Sunday 29 March 2026 03:15:12 +0000 (0:00:00.138) 0:03:53.651 ********** 2026-03-29 03:15:57.185270 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:15:57.185277 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:15:57.185281 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:15:57.185286 | orchestrator | 2026-03-29 03:15:57.185290 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-29 03:15:57.185302 | orchestrator | Sunday 29 March 2026 03:15:34 +0000 (0:00:22.224) 0:04:15.876 ********** 2026-03-29 03:15:57.185307 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:15:57.185311 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:15:57.185316 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:15:57.185320 | orchestrator | 2026-03-29 03:15:57.185324 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-29 03:15:57.185328 | orchestrator | 2026-03-29 03:15:57.185338 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 03:15:57.185343 | orchestrator | Sunday 29 March 2026 03:15:44 +0000 (0:00:10.230) 0:04:26.107 ********** 2026-03-29 03:15:57.185359 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:15:57.185365 | orchestrator | 2026-03-29 03:15:57.185369 | 
orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-29 03:15:57.185389 | orchestrator | Sunday 29 March 2026 03:15:45 +0000 (0:00:01.184) 0:04:27.291 **********
2026-03-29 03:15:57.185394 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:15:57.185399 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:15:57.185403 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:15:57.185407 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:15:57.185412 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:15:57.185416 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:15:57.185420 | orchestrator |
2026-03-29 03:15:57.185424 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-03-29 03:15:57.185428 | orchestrator | Sunday 29 March 2026 03:15:46 +0000 (0:00:00.744) 0:04:28.035 **********
2026-03-29 03:15:57.185432 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:15:57.185437 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:15:57.185441 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:15:57.185445 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:15:57.185453 | orchestrator |
2026-03-29 03:15:57.185460 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-29 03:15:57.185467 | orchestrator | Sunday 29 March 2026 03:15:47 +0000 (0:00:00.865) 0:04:28.901 **********
2026-03-29 03:15:57.185474 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-03-29 03:15:57.185482 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-03-29 03:15:57.185488 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-03-29 03:15:57.185494 | orchestrator |
2026-03-29 03:15:57.185500 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-29 03:15:57.185506 | orchestrator | Sunday 29 March 2026 03:15:48 +0000 (0:00:00.897) 0:04:29.798 **********
2026-03-29 03:15:57.185514 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-03-29 03:15:57.185520 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-03-29 03:15:57.185527 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-03-29 03:15:57.185533 | orchestrator |
2026-03-29 03:15:57.185539 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-29 03:15:57.185545 | orchestrator | Sunday 29 March 2026 03:15:49 +0000 (0:00:01.209) 0:04:31.008 **********
2026-03-29 03:15:57.185551 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-03-29 03:15:57.185559 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:15:57.185566 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-03-29 03:15:57.185573 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:15:57.185581 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-03-29 03:15:57.185589 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:15:57.185596 | orchestrator |
2026-03-29 03:15:57.185603 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-29 03:15:57.185609 | orchestrator | Sunday 29 March 2026 03:15:50 +0000 (0:00:00.568) 0:04:31.576 **********
2026-03-29 03:15:57.185616 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 03:15:57.185622 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 03:15:57.185629 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 03:15:57.185635 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:15:57.185643 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 03:15:57.185649 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 03:15:57.185656 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:15:57.185663 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 03:15:57.185686 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 03:15:57.185695 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:15:57.185702 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 03:15:57.185719 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 03:15:57.185726 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 03:15:57.185733 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-29 03:15:57.185740 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-29 03:15:57.185747 | orchestrator |
2026-03-29 03:15:57.185755 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-29 03:15:57.185762 | orchestrator | Sunday 29 March 2026 03:15:52 +0000 (0:00:02.030) 0:04:33.607 **********
2026-03-29 03:15:57.185770 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:15:57.185777 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:15:57.185782 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:15:57.185787 | orchestrator | changed: [testbed-node-3]
2026-03-29 03:15:57.185791 | orchestrator | changed: [testbed-node-4]
2026-03-29 03:15:57.185796 | orchestrator | changed: [testbed-node-5]
2026-03-29 03:15:57.185801 | orchestrator |
2026-03-29 03:15:57.185806 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-29 03:15:57.185810 | orchestrator | Sunday 29 March 2026 03:15:53 +0000 (0:00:01.197) 0:04:34.804 **********
2026-03-29 03:15:57.185815 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:15:57.185820 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:15:57.185825 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:15:57.185830 | orchestrator | changed: [testbed-node-5]
2026-03-29 03:15:57.185834 | orchestrator | changed: [testbed-node-4]
2026-03-29 03:15:57.185839 | orchestrator | changed: [testbed-node-3]
2026-03-29 03:15:57.185844 | orchestrator |
2026-03-29 03:15:57.185855 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-29 03:15:57.185860 | orchestrator | Sunday 29 March 2026 03:15:55 +0000 (0:00:01.820) 0:04:36.624 **********
2026-03-29 03:15:57.185867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:15:57.185876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:15:57.185887 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:15:58.956035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:15:58.956128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:15:58.956156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:15:58.956219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:15:58.956230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:15:58.956240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:15:58.956282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:15:58.956291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:15:58.956305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:15:58.956314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:15:58.956323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:15:58.956337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:15:58.956346 | orchestrator |
2026-03-29 03:15:58.956356 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-29 03:15:58.956369 | orchestrator | Sunday 29 March 2026 03:15:57 +0000 (0:00:02.337) 0:04:38.961 **********
2026-03-29 03:15:58.956384 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:15:58.956398 | orchestrator |
2026-03-29 03:15:58.956412 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-29 03:15:58.956435 | orchestrator | Sunday 29 March 2026 03:15:58 +0000 (0:00:01.281) 0:04:40.243 **********
2026-03-29 03:16:02.264745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:16:02.264835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:16:02.264842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:16:02.264864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:16:02.264869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:16:02.264884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:16:02.264890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:16:02.264896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:16:02.264902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:16:02.264906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:16:02.264924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:16:02.264934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:16:04.033437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:16:04.033559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:16:04.033573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:16:04.033580 | orchestrator |
2026-03-29 03:16:04.033602 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-29 03:16:04.033609 | orchestrator | Sunday 29 March 2026 03:16:02 +0000 (0:00:03.755) 0:04:43.999 **********
2026-03-29 03:16:04.033619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:16:04.033628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:16:04.033655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:16:04.033665 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:16:04.033680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:16:04.033689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:16:04.033704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:16:04.033709 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:16:04.033714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-29 03:16:04.033725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-29 03:16:05.649131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-29 03:16:05.649251 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:16:05.649282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:16:05.649293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:16:05.649321 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:16:05.649331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:16:05.649339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:16:05.649348 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:16:05.649356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-29 03:16:05.649380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:16:05.649389 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:16:05.649398 | orchestrator |
2026-03-29 03:16:05.649407 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-29 03:16:05.649416 | orchestrator | Sunday 29 March 2026 03:16:04 +0000 (0:00:01.655) 0:04:45.655 **********
2026-03-29 03:16:05.649431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes':
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:05.649447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:05.649457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 03:16:05.649465 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:05.649473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:05.649489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:09.957257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 03:16:09.957354 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:09.957363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:09.957368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:09.957374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 03:16:09.957378 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:09.957383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 03:16:09.957399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:16:09.957403 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:09.957412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 03:16:09.957419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:16:09.957423 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:09.957427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 03:16:09.957431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:16:09.957435 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:09.957439 | orchestrator | 2026-03-29 03:16:09.957444 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 03:16:09.957449 | orchestrator | Sunday 29 March 2026 03:16:06 +0000 (0:00:01.982) 0:04:47.638 ********** 2026-03-29 03:16:09.957453 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:09.957457 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:09.957460 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:09.957465 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 03:16:09.957469 | orchestrator | 2026-03-29 03:16:09.957473 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-29 
03:16:09.957477 | orchestrator | Sunday 29 March 2026 03:16:07 +0000 (0:00:01.076) 0:04:48.715 ********** 2026-03-29 03:16:09.957481 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 03:16:09.957485 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 03:16:09.957489 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 03:16:09.957492 | orchestrator | 2026-03-29 03:16:09.957496 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-29 03:16:09.957500 | orchestrator | Sunday 29 March 2026 03:16:08 +0000 (0:00:01.085) 0:04:49.800 ********** 2026-03-29 03:16:09.957504 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 03:16:09.957507 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 03:16:09.957515 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 03:16:09.957519 | orchestrator | 2026-03-29 03:16:09.957522 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-29 03:16:09.957526 | orchestrator | Sunday 29 March 2026 03:16:09 +0000 (0:00:00.950) 0:04:50.751 ********** 2026-03-29 03:16:09.957530 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:16:09.957534 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:16:09.957538 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:16:09.957542 | orchestrator | 2026-03-29 03:16:09.957548 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-29 03:16:30.716904 | orchestrator | Sunday 29 March 2026 03:16:09 +0000 (0:00:00.498) 0:04:51.249 ********** 2026-03-29 03:16:30.717003 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:16:30.717015 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:16:30.717021 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:16:30.717028 | orchestrator | 2026-03-29 03:16:30.717035 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-03-29 03:16:30.717043 | orchestrator | Sunday 29 March 2026 03:16:10 +0000 (0:00:00.513) 0:04:51.763 ********** 2026-03-29 03:16:30.717050 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 03:16:30.717056 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 03:16:30.717072 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 03:16:30.717076 | orchestrator | 2026-03-29 03:16:30.717080 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-29 03:16:30.717084 | orchestrator | Sunday 29 March 2026 03:16:11 +0000 (0:00:01.384) 0:04:53.147 ********** 2026-03-29 03:16:30.717088 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 03:16:30.717092 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 03:16:30.717096 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 03:16:30.717100 | orchestrator | 2026-03-29 03:16:30.717103 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-29 03:16:30.717107 | orchestrator | Sunday 29 March 2026 03:16:13 +0000 (0:00:01.220) 0:04:54.367 ********** 2026-03-29 03:16:30.717111 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 03:16:30.717115 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 03:16:30.717119 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 03:16:30.717122 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-29 03:16:30.717126 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-29 03:16:30.717130 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-29 03:16:30.717133 | orchestrator | 2026-03-29 03:16:30.717137 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-29 
03:16:30.717141 | orchestrator | Sunday 29 March 2026 03:16:16 +0000 (0:00:03.777) 0:04:58.145 ********** 2026-03-29 03:16:30.717159 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:30.717168 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:30.717172 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:30.717175 | orchestrator | 2026-03-29 03:16:30.717179 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-29 03:16:30.717183 | orchestrator | Sunday 29 March 2026 03:16:17 +0000 (0:00:00.309) 0:04:58.455 ********** 2026-03-29 03:16:30.717187 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:30.717190 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:30.717194 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:30.717198 | orchestrator | 2026-03-29 03:16:30.717202 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-29 03:16:30.717206 | orchestrator | Sunday 29 March 2026 03:16:17 +0000 (0:00:00.502) 0:04:58.957 ********** 2026-03-29 03:16:30.717210 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:16:30.717213 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:16:30.717217 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:16:30.717238 | orchestrator | 2026-03-29 03:16:30.717242 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-29 03:16:30.717246 | orchestrator | Sunday 29 March 2026 03:16:18 +0000 (0:00:01.302) 0:05:00.260 ********** 2026-03-29 03:16:30.717250 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 03:16:30.717256 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 03:16:30.717260 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 03:16:30.717263 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 03:16:30.717267 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 03:16:30.717271 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 03:16:30.717275 | orchestrator | 2026-03-29 03:16:30.717279 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-29 03:16:30.717283 | orchestrator | Sunday 29 March 2026 03:16:22 +0000 (0:00:03.394) 0:05:03.654 ********** 2026-03-29 03:16:30.717286 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 03:16:30.717297 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 03:16:30.717301 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 03:16:30.717310 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 03:16:30.717314 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:16:30.717317 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 03:16:30.717321 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:16:30.717325 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 03:16:30.717329 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:16:30.717332 | orchestrator | 2026-03-29 03:16:30.717336 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-29 03:16:30.717340 | orchestrator | Sunday 29 March 2026 03:16:25 +0000 (0:00:03.062) 0:05:06.717 ********** 2026-03-29 03:16:30.717344 | 
orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:30.717348 | orchestrator | 2026-03-29 03:16:30.717364 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-29 03:16:30.717368 | orchestrator | Sunday 29 March 2026 03:16:25 +0000 (0:00:00.156) 0:05:06.873 ********** 2026-03-29 03:16:30.717372 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:30.717376 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:30.717380 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:30.717383 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:30.717387 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:30.717391 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:30.717394 | orchestrator | 2026-03-29 03:16:30.717398 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-29 03:16:30.717402 | orchestrator | Sunday 29 March 2026 03:16:26 +0000 (0:00:00.691) 0:05:07.565 ********** 2026-03-29 03:16:30.717410 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 03:16:30.717414 | orchestrator | 2026-03-29 03:16:30.717417 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-29 03:16:30.717421 | orchestrator | Sunday 29 March 2026 03:16:26 +0000 (0:00:00.597) 0:05:08.163 ********** 2026-03-29 03:16:30.717426 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:30.717430 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:30.717434 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:30.717438 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:30.717443 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:30.717452 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:30.717456 | orchestrator | 2026-03-29 03:16:30.717460 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-03-29 03:16:30.717465 | orchestrator | Sunday 29 March 2026 03:16:27 +0000 (0:00:00.660) 0:05:08.823 ********** 2026-03-29 03:16:30.717471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 03:16:30.717479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 03:16:30.717484 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 03:16:30.717495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399448 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:35.399476 | orchestrator | 2026-03-29 03:16:35.399482 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-29 03:16:35.399488 | orchestrator | Sunday 29 March 2026 03:16:31 +0000 (0:00:03.573) 0:05:12.397 ********** 2026-03-29 03:16:35.399497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:37.772121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:37.772285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:37.772314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:37.772332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:37.772352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:37.772409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:37.772461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:37.772483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:37.772503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:16:37.772523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:16:37.772543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:16:37.772588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:55.097919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:55.098075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:16:55.098088 | orchestrator | 2026-03-29 03:16:55.098103 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-29 03:16:55.098111 | orchestrator | Sunday 29 March 2026 03:16:37 +0000 (0:00:06.667) 0:05:19.064 ********** 2026-03-29 03:16:55.098117 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:55.098124 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:55.098164 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:55.098172 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:55.098178 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:55.098183 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:55.098189 | orchestrator | 2026-03-29 03:16:55.098195 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-29 03:16:55.098201 | orchestrator | Sunday 29 March 2026 03:16:39 +0000 (0:00:01.367) 0:05:20.432 ********** 2026-03-29 03:16:55.098208 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 03:16:55.098214 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 03:16:55.098220 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 03:16:55.098226 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 03:16:55.098232 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 03:16:55.098238 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 03:16:55.098245 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:55.098251 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 03:16:55.098256 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:55.098262 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 03:16:55.098268 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:55.098274 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 03:16:55.098299 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 03:16:55.098305 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 03:16:55.098312 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 03:16:55.098318 | orchestrator | 2026-03-29 03:16:55.098334 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-29 03:16:55.098340 | orchestrator | Sunday 29 March 2026 03:16:42 +0000 (0:00:03.232) 0:05:23.665 ********** 2026-03-29 03:16:55.098353 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:55.098359 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:55.098364 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:55.098370 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:55.098376 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:55.098382 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:55.098387 | orchestrator | 2026-03-29 03:16:55.098393 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-29 03:16:55.098399 | orchestrator | Sunday 29 March 2026 03:16:42 +0000 (0:00:00.627) 0:05:24.292 ********** 2026-03-29 03:16:55.098405 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 03:16:55.098412 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 03:16:55.098418 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 03:16:55.098424 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-29 03:16:55.098449 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-29 03:16:55.098457 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 03:16:55.098463 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-29 03:16:55.098470 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 03:16:55.098477 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 03:16:55.098484 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 03:16:55.098490 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:55.098497 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 03:16:55.098503 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 03:16:55.098510 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 03:16:55.098517 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:55.098524 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 03:16:55.098531 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 03:16:55.098537 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 03:16:55.098544 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-29 03:16:55.098551 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-29 03:16:55.098557 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-29 03:16:55.098569 | orchestrator | 2026-03-29 03:16:55.098577 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-29 03:16:55.098583 | orchestrator | Sunday 29 March 2026 03:16:47 +0000 (0:00:04.949) 0:05:29.241 ********** 2026-03-29 03:16:55.098591 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 03:16:55.098598 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 03:16:55.098604 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 03:16:55.098613 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 03:16:55.098623 | 
orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 03:16:55.098633 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 03:16:55.098642 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 03:16:55.098652 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 03:16:55.098662 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 03:16:55.098671 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 03:16:55.098680 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 03:16:55.098690 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 03:16:55.098699 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 03:16:55.098708 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:55.098717 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 03:16:55.098729 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 03:16:55.098739 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:55.098751 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 03:16:55.098759 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:55.098766 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 03:16:55.098773 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 03:16:55.098780 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 03:16:55.098787 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 03:16:55.098794 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 03:16:55.098802 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 03:16:55.098820 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 03:16:59.910822 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 03:16:59.910923 | orchestrator | 2026-03-29 03:16:59.910944 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-29 03:16:59.910959 | orchestrator | Sunday 29 March 2026 03:16:55 +0000 (0:00:07.135) 0:05:36.377 ********** 2026-03-29 03:16:59.910973 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:59.910986 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:59.910999 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:59.911012 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:59.911024 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:59.911037 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:59.911094 | orchestrator | 2026-03-29 03:16:59.911119 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-29 03:16:59.911231 | orchestrator | Sunday 29 March 2026 03:16:55 +0000 (0:00:00.813) 0:05:37.190 ********** 2026-03-29 03:16:59.911247 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:59.911255 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:16:59.911262 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:16:59.911270 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:59.911277 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:59.911284 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:59.911291 | orchestrator | 2026-03-29 03:16:59.911298 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-29 03:16:59.911307 | orchestrator | Sunday 29 March 2026 03:16:56 +0000 (0:00:00.648) 0:05:37.838 ********** 2026-03-29 03:16:59.911314 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:16:59.911321 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:16:59.911328 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:16:59.911336 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:16:59.911344 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:16:59.911353 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:16:59.911366 | orchestrator | 2026-03-29 03:16:59.911378 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-29 03:16:59.911390 | orchestrator | Sunday 29 March 2026 03:16:58 +0000 (0:00:02.394) 0:05:40.233 ********** 2026-03-29 03:16:59.911407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:59.911424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:59.911440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 03:16:59.911455 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:16:59.911497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:59.911516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:59.911525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 03:16:59.911533 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 03:16:59.911542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 03:16:59.911554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 03:16:59.911580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 03:17:03.264059 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:17:03.264170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 03:17:03.264184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:17:03.264192 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:17:03.264200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 03:17:03.264207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:17:03.264211 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:17:03.264215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 03:17:03.264219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:17:03.264239 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:17:03.264243 | orchestrator | 2026-03-29 03:17:03.264248 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-29 03:17:03.264263 | orchestrator | Sunday 29 March 2026 03:17:00 +0000 (0:00:01.254) 0:05:41.488 ********** 2026-03-29 03:17:03.264268 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-29 03:17:03.264285 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-29 03:17:03.264290 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:17:03.264294 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-29 03:17:03.264297 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-29 03:17:03.264301 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:17:03.264305 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-29 03:17:03.264309 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-29 03:17:03.264312 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:17:03.264316 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-29 03:17:03.264320 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-29 03:17:03.264323 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:17:03.264327 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-03-29 03:17:03.264331 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-29 03:17:03.264335 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:17:03.264338 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-29 03:17:03.264342 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-29 03:17:03.264346 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:17:03.264350 | orchestrator | 2026-03-29 03:17:03.264354 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-29 03:17:03.264358 | orchestrator | Sunday 29 March 2026 03:17:00 +0000 (0:00:00.756) 0:05:42.244 ********** 2026-03-29 03:17:03.264362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 03:17:03.264368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 03:17:03.264377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 03:17:03.264390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614825 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 03:17:05.614931 | orchestrator | 2026-03-29 03:17:05.614944 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 03:17:05.614956 | orchestrator | Sunday 29 March 2026 03:17:03 +0000 (0:00:02.861) 
0:05:45.105 ********** 2026-03-29 03:17:05.614968 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:17:05.614979 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:17:05.614990 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:17:05.615000 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:17:05.615011 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:17:05.615021 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:17:05.615032 | orchestrator | 2026-03-29 03:17:05.615043 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 03:17:05.615054 | orchestrator | Sunday 29 March 2026 03:17:04 +0000 (0:00:00.837) 0:05:45.943 ********** 2026-03-29 03:17:05.615065 | orchestrator | 2026-03-29 03:17:05.615095 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 03:17:05.615111 | orchestrator | Sunday 29 March 2026 03:17:04 +0000 (0:00:00.161) 0:05:46.104 ********** 2026-03-29 03:17:05.615143 | orchestrator | 2026-03-29 03:17:05.615157 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 03:17:05.615171 | orchestrator | Sunday 29 March 2026 03:17:04 +0000 (0:00:00.130) 0:05:46.234 ********** 2026-03-29 03:17:05.615183 | orchestrator | 2026-03-29 03:17:05.615195 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 03:17:05.615214 | orchestrator | Sunday 29 March 2026 03:17:05 +0000 (0:00:00.131) 0:05:46.366 ********** 2026-03-29 03:20:04.749684 | orchestrator | 2026-03-29 03:20:04.749857 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 03:20:04.749881 | orchestrator | Sunday 29 March 2026 03:17:05 +0000 (0:00:00.145) 0:05:46.512 ********** 2026-03-29 03:20:04.749890 | orchestrator | 2026-03-29 03:20:04.749899 | orchestrator | TASK [nova-cell : Flush 
handlers] ********************************************** 2026-03-29 03:20:04.749907 | orchestrator | Sunday 29 March 2026 03:17:05 +0000 (0:00:00.252) 0:05:46.764 ********** 2026-03-29 03:20:04.749915 | orchestrator | 2026-03-29 03:20:04.749924 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-29 03:20:04.749932 | orchestrator | Sunday 29 March 2026 03:17:05 +0000 (0:00:00.129) 0:05:46.893 ********** 2026-03-29 03:20:04.749940 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:20:04.749950 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:20:04.749958 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:20:04.749966 | orchestrator | 2026-03-29 03:20:04.749974 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-29 03:20:04.749982 | orchestrator | Sunday 29 March 2026 03:17:11 +0000 (0:00:06.309) 0:05:53.203 ********** 2026-03-29 03:20:04.749990 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:20:04.750198 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:20:04.750221 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:20:04.750230 | orchestrator | 2026-03-29 03:20:04.750241 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-29 03:20:04.750251 | orchestrator | Sunday 29 March 2026 03:17:29 +0000 (0:00:17.818) 0:06:11.022 ********** 2026-03-29 03:20:04.750260 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:20:04.750270 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:20:04.750279 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:20:04.750288 | orchestrator | 2026-03-29 03:20:04.750298 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-29 03:20:04.750307 | orchestrator | Sunday 29 March 2026 03:17:49 +0000 (0:00:19.984) 0:06:31.006 ********** 2026-03-29 03:20:04.750316 | orchestrator | 
changed: [testbed-node-5] 2026-03-29 03:20:04.750325 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:20:04.750334 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:20:04.750343 | orchestrator | 2026-03-29 03:20:04.750353 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-29 03:20:04.750362 | orchestrator | Sunday 29 March 2026 03:18:21 +0000 (0:00:31.826) 0:07:02.833 ********** 2026-03-29 03:20:04.750376 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-29 03:20:04.750392 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-29 03:20:04.750404 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-03-29 03:20:04.750417 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:20:04.750429 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:20:04.750442 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:20:04.750454 | orchestrator | 2026-03-29 03:20:04.750466 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-29 03:20:04.750480 | orchestrator | Sunday 29 March 2026 03:18:27 +0000 (0:00:06.316) 0:07:09.149 ********** 2026-03-29 03:20:04.750492 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:20:04.750504 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:20:04.750517 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:20:04.750529 | orchestrator | 2026-03-29 03:20:04.750542 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-29 03:20:04.750555 | orchestrator | Sunday 29 March 2026 03:18:28 +0000 (0:00:00.787) 0:07:09.937 ********** 2026-03-29 03:20:04.750568 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:20:04.750581 | orchestrator | changed: [testbed-node-5] 
2026-03-29 03:20:04.750595 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:20:04.750608 | orchestrator | 2026-03-29 03:20:04.750622 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-29 03:20:04.750638 | orchestrator | Sunday 29 March 2026 03:18:54 +0000 (0:00:26.359) 0:07:36.297 ********** 2026-03-29 03:20:04.750648 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:20:04.750656 | orchestrator | 2026-03-29 03:20:04.750664 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-29 03:20:04.750672 | orchestrator | Sunday 29 March 2026 03:18:55 +0000 (0:00:00.141) 0:07:36.439 ********** 2026-03-29 03:20:04.750681 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:20:04.750688 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:20:04.750696 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:20:04.750704 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:20:04.750712 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:20:04.750721 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-29 03:20:04.750730 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 03:20:04.750739 | orchestrator | 2026-03-29 03:20:04.750746 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-29 03:20:04.750775 | orchestrator | Sunday 29 March 2026 03:19:18 +0000 (0:00:23.120) 0:07:59.559 ********** 2026-03-29 03:20:04.750790 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:20:04.750803 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:20:04.750816 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:20:04.750830 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:20:04.750857 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:20:04.750865 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:20:04.750872 | orchestrator | 2026-03-29 03:20:04.750880 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-29 03:20:04.750888 | orchestrator | Sunday 29 March 2026 03:19:27 +0000 (0:00:09.371) 0:08:08.931 ********** 2026-03-29 03:20:04.750900 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:20:04.750912 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:20:04.750923 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:20:04.750944 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:20:04.750959 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:20:04.751003 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-29 03:20:04.751018 | orchestrator | 2026-03-29 03:20:04.751057 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-29 03:20:04.751070 | orchestrator | Sunday 29 March 2026 03:19:31 +0000 (0:00:03.556) 0:08:12.487 ********** 2026-03-29 03:20:04.751082 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 03:20:04.751094 | 
orchestrator | 2026-03-29 03:20:04.751105 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-29 03:20:04.751116 | orchestrator | Sunday 29 March 2026 03:19:45 +0000 (0:00:13.876) 0:08:26.363 ********** 2026-03-29 03:20:04.751128 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 03:20:04.751140 | orchestrator | 2026-03-29 03:20:04.751152 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-29 03:20:04.751164 | orchestrator | Sunday 29 March 2026 03:19:46 +0000 (0:00:01.570) 0:08:27.934 ********** 2026-03-29 03:20:04.751176 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:20:04.751189 | orchestrator | 2026-03-29 03:20:04.751201 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-29 03:20:04.751213 | orchestrator | Sunday 29 March 2026 03:19:48 +0000 (0:00:01.727) 0:08:29.662 ********** 2026-03-29 03:20:04.751226 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 03:20:04.751238 | orchestrator | 2026-03-29 03:20:04.751250 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-29 03:20:04.751262 | orchestrator | Sunday 29 March 2026 03:20:00 +0000 (0:00:12.267) 0:08:41.930 ********** 2026-03-29 03:20:04.751273 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:20:04.751287 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:20:04.751299 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:20:04.751312 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:20:04.751325 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:20:04.751337 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:20:04.751348 | orchestrator | 2026-03-29 03:20:04.751361 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-29 03:20:04.751374 | orchestrator | 2026-03-29 
03:20:04.751387 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-29 03:20:04.751398 | orchestrator | Sunday 29 March 2026 03:20:02 +0000 (0:00:01.909) 0:08:43.839 **********
2026-03-29 03:20:04.751410 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:20:04.751422 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:20:04.751434 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:20:04.751447 | orchestrator |
2026-03-29 03:20:04.751459 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-29 03:20:04.751471 | orchestrator |
2026-03-29 03:20:04.751482 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-29 03:20:04.751495 | orchestrator | Sunday 29 March 2026 03:20:03 +0000 (0:00:00.961) 0:08:44.801 **********
2026-03-29 03:20:04.751524 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:04.751537 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:04.751548 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:04.751559 | orchestrator |
2026-03-29 03:20:04.751571 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-29 03:20:04.751583 | orchestrator |
2026-03-29 03:20:04.751596 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-29 03:20:04.751608 | orchestrator | Sunday 29 March 2026 03:20:04 +0000 (0:00:00.697) 0:08:45.498 **********
2026-03-29 03:20:04.751620 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-29 03:20:04.751632 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-29 03:20:04.751645 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-29 03:20:04.751659 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-29 03:20:04.751671 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-29 03:20:04.751682 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-29 03:20:04.751694 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:20:04.751706 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-29 03:20:04.751718 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-29 03:20:04.751731 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-29 03:20:04.751744 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-29 03:20:04.751756 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-29 03:20:04.751770 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-29 03:20:04.751782 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:20:04.751795 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-29 03:20:04.751808 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-29 03:20:04.751821 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-29 03:20:04.751834 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-29 03:20:04.751848 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-29 03:20:04.751860 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-29 03:20:04.751873 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:20:04.751886 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-29 03:20:04.751912 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-29 03:20:04.751925 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-29 03:20:04.751938 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-29 03:20:04.751950 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-29 03:20:04.751964 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-29 03:20:04.751976 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:04.751989 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-29 03:20:04.752049 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-29 03:20:07.956456 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-29 03:20:07.956582 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-29 03:20:07.956596 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-29 03:20:07.956607 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-29 03:20:07.956618 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:07.956628 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-29 03:20:07.956638 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-29 03:20:07.956647 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-29 03:20:07.956686 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-29 03:20:07.956695 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-29 03:20:07.956704 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-29 03:20:07.956713 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:07.956722 | orchestrator |
2026-03-29 03:20:07.956732 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-29 03:20:07.956741 | orchestrator |
2026-03-29 03:20:07.956750 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-29 03:20:07.956760 | orchestrator | Sunday 29 March 2026 03:20:05 +0000 (0:00:01.346) 0:08:46.845 **********
2026-03-29 03:20:07.956768 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-29 03:20:07.956778 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-29 03:20:07.956787 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:07.956796 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-29 03:20:07.956805 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-29 03:20:07.956813 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:07.956822 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-29 03:20:07.956831 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-29 03:20:07.956839 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:07.956848 | orchestrator |
2026-03-29 03:20:07.956857 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-29 03:20:07.956866 | orchestrator |
2026-03-29 03:20:07.956875 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-29 03:20:07.956884 | orchestrator | Sunday 29 March 2026 03:20:06 +0000 (0:00:00.568) 0:08:47.413 **********
2026-03-29 03:20:07.956892 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:07.956901 | orchestrator |
2026-03-29 03:20:07.956910 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-29 03:20:07.956919 | orchestrator |
2026-03-29 03:20:07.956928 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-29 03:20:07.956937 | orchestrator | Sunday 29 March 2026 03:20:07 +0000 (0:00:00.957) 0:08:48.371 **********
2026-03-29 03:20:07.956945 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:07.956957 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:07.956967 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:07.956977 | orchestrator |
2026-03-29 03:20:07.956988 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:20:07.956999 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:20:07.957013 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-29 03:20:07.957052 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-29 03:20:07.957063 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-29 03:20:07.957073 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-29 03:20:07.957083 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-29 03:20:07.957093 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-29 03:20:07.957103 | orchestrator |
2026-03-29 03:20:07.957113 | orchestrator |
2026-03-29 03:20:07.957131 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:20:07.957141 | orchestrator | Sunday 29 March 2026 03:20:07 +0000 (0:00:00.455) 0:08:48.826 **********
2026-03-29 03:20:07.957151 | orchestrator | ===============================================================================
2026-03-29 03:20:07.957180 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.57s
2026-03-29 03:20:07.957191 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 31.83s
2026-03-29 03:20:07.957201 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.36s
2026-03-29 03:20:07.957211 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.47s
2026-03-29 03:20:07.957221 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.12s
2026-03-29 03:20:07.957232 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.22s
2026-03-29 03:20:07.957259 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 21.64s
2026-03-29 03:20:07.957270 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.98s
2026-03-29 03:20:07.957280 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.82s
2026-03-29 03:20:07.957291 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.45s
2026-03-29 03:20:07.957306 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.91s
2026-03-29 03:20:07.957322 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.58s
2026-03-29 03:20:07.957336 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.98s
2026-03-29 03:20:07.957350 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.88s
2026-03-29 03:20:07.957363 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.27s
2026-03-29 03:20:07.957376 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.23s
2026-03-29 03:20:07.957391 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.37s
2026-03-29 03:20:07.957405 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.36s
2026-03-29 03:20:07.957420 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.20s
2026-03-29 03:20:07.957434 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 7.38s
2026-03-29 03:20:10.528544 | orchestrator | 2026-03-29 03:20:10 | INFO  | Task 472cb157-6aed-415a-a488-bdd9715dac7e (horizon) was prepared for execution.
2026-03-29 03:20:10.528661 | orchestrator | 2026-03-29 03:20:10 | INFO  | It takes a moment until task 472cb157-6aed-415a-a488-bdd9715dac7e (horizon) has been started and output is visible here.
2026-03-29 03:20:17.579697 | orchestrator |
2026-03-29 03:20:17.579810 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:20:17.579823 | orchestrator |
2026-03-29 03:20:17.579831 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:20:17.579839 | orchestrator | Sunday 29 March 2026 03:20:14 +0000 (0:00:00.258) 0:00:00.258 **********
2026-03-29 03:20:17.579846 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:17.579855 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:17.579862 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:17.579869 | orchestrator |
2026-03-29 03:20:17.579877 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:20:17.579885 | orchestrator | Sunday 29 March 2026 03:20:14 +0000 (0:00:00.299) 0:00:00.558 **********
2026-03-29 03:20:17.579893 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-29 03:20:17.579901 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-29 03:20:17.579908 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-29 03:20:17.579915 | orchestrator |
2026-03-29 03:20:17.579923 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-29 03:20:17.579962 | orchestrator |
2026-03-29 03:20:17.579975 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-29 03:20:17.579987 | orchestrator | Sunday 29 March 2026 03:20:15 +0000 (0:00:00.435) 0:00:00.993 **********
2026-03-29 03:20:17.579998 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:20:17.580092 | orchestrator |
2026-03-29 03:20:17.580109 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-29 03:20:17.580120 | orchestrator | Sunday 29 March 2026 03:20:15 +0000 (0:00:00.524) 0:00:01.518 **********
2026-03-29 03:20:17.580149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 03:20:17.580180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 03:20:17.580206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 03:20:17.580220 | orchestrator |
2026-03-29 03:20:17.580236 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-29 03:20:17.580254 | orchestrator | Sunday 29 March 2026 03:20:17 +0000 (0:00:01.136) 0:00:02.654 **********
2026-03-29 03:20:17.580266 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:17.580279 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:17.580290 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:17.580302 | orchestrator |
2026-03-29 03:20:17.580313 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-29 03:20:17.580324 | orchestrator | Sunday 29 March 2026 03:20:17 +0000 (0:00:00.452) 0:00:03.107 **********
2026-03-29 03:20:17.580344 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-29 03:20:23.751711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-29 03:20:23.751830 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-29 03:20:23.751842 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-29 03:20:23.751851 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-29 03:20:23.751860 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-29 03:20:23.751868 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-29 03:20:23.751876 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-29 03:20:23.751884 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-29 03:20:23.751892 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-29 03:20:23.751899 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-29 03:20:23.751907 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-29 03:20:23.751915 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-29 03:20:23.751923 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-29 03:20:23.751931 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-29 03:20:23.751939 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-29 03:20:23.751946 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-29 03:20:23.751954 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-29 03:20:23.751962 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-29 03:20:23.751969 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-29 03:20:23.751977 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-29 03:20:23.751985 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-29 03:20:23.751993 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-29 03:20:23.752001 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-29 03:20:23.752081 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-29 03:20:23.752106 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-29 03:20:23.752115 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-29 03:20:23.752123 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-29 03:20:23.752132 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-29 03:20:23.752139 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-29 03:20:23.752147 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-29 03:20:23.752155 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-29 03:20:23.752173 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-29 03:20:23.752182 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-29 03:20:23.752190 | orchestrator |
2026-03-29 03:20:23.752199 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:23.752208 | orchestrator | Sunday 29 March 2026 03:20:18 +0000 (0:00:00.724) 0:00:03.831 **********
2026-03-29 03:20:23.752216 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:23.752224 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:23.752232 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:23.752240 | orchestrator |
2026-03-29 03:20:23.752248 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:23.752257 | orchestrator | Sunday 29 March 2026 03:20:18 +0000 (0:00:00.322) 0:00:04.154 **********
2026-03-29 03:20:23.752267 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752277 | orchestrator |
2026-03-29 03:20:23.752302 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:23.752312 | orchestrator | Sunday 29 March 2026 03:20:18 +0000 (0:00:00.315) 0:00:04.469 **********
2026-03-29 03:20:23.752321 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752331 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:23.752340 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:23.752349 | orchestrator |
2026-03-29 03:20:23.752358 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:23.752367 | orchestrator | Sunday 29 March 2026 03:20:19 +0000 (0:00:00.300) 0:00:04.770 **********
2026-03-29 03:20:23.752377 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:23.752386 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:23.752395 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:23.752404 | orchestrator |
2026-03-29 03:20:23.752413 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:23.752422 | orchestrator | Sunday 29 March 2026 03:20:19 +0000 (0:00:00.365) 0:00:05.135 **********
2026-03-29 03:20:23.752432 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752441 | orchestrator |
2026-03-29 03:20:23.752450 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:23.752459 | orchestrator | Sunday 29 March 2026 03:20:19 +0000 (0:00:00.142) 0:00:05.278 **********
2026-03-29 03:20:23.752469 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752478 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:23.752487 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:23.752496 | orchestrator |
2026-03-29 03:20:23.752505 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:23.752514 | orchestrator | Sunday 29 March 2026 03:20:19 +0000 (0:00:00.293) 0:00:05.572 **********
2026-03-29 03:20:23.752523 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:23.752532 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:23.752540 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:23.752549 | orchestrator |
2026-03-29 03:20:23.752558 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:23.752579 | orchestrator | Sunday 29 March 2026 03:20:20 +0000 (0:00:00.522) 0:00:06.095 **********
2026-03-29 03:20:23.752598 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752607 | orchestrator |
2026-03-29 03:20:23.752616 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:23.752626 | orchestrator | Sunday 29 March 2026 03:20:20 +0000 (0:00:00.128) 0:00:06.223 **********
2026-03-29 03:20:23.752635 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752644 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:23.752652 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:23.752660 | orchestrator |
2026-03-29 03:20:23.752668 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:23.752683 | orchestrator | Sunday 29 March 2026 03:20:20 +0000 (0:00:00.360) 0:00:06.584 **********
2026-03-29 03:20:23.752691 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:23.752699 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:23.752707 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:23.752714 | orchestrator |
2026-03-29 03:20:23.752723 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:23.752731 | orchestrator | Sunday 29 March 2026 03:20:21 +0000 (0:00:00.315) 0:00:06.899 **********
2026-03-29 03:20:23.752738 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752746 | orchestrator |
2026-03-29 03:20:23.752759 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:23.752767 | orchestrator | Sunday 29 March 2026 03:20:21 +0000 (0:00:00.131) 0:00:07.030 **********
2026-03-29 03:20:23.752775 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752783 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:23.752791 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:23.752799 | orchestrator |
2026-03-29 03:20:23.752807 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:23.752814 | orchestrator | Sunday 29 March 2026 03:20:21 +0000 (0:00:00.492) 0:00:07.523 **********
2026-03-29 03:20:23.752822 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:23.752830 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:23.752838 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:23.752846 | orchestrator |
2026-03-29 03:20:23.752854 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:23.752862 | orchestrator | Sunday 29 March 2026 03:20:22 +0000 (0:00:00.358) 0:00:07.882 **********
2026-03-29 03:20:23.752869 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752877 | orchestrator |
2026-03-29 03:20:23.752885 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:23.752893 | orchestrator | Sunday 29 March 2026 03:20:22 +0000 (0:00:00.157) 0:00:08.040 **********
2026-03-29 03:20:23.752901 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.752909 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:23.752917 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:23.752925 | orchestrator |
2026-03-29 03:20:23.752933 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:23.752940 | orchestrator | Sunday 29 March 2026 03:20:22 +0000 (0:00:00.366) 0:00:08.407 **********
2026-03-29 03:20:23.752948 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:23.752956 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:23.752964 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:23.752972 | orchestrator |
2026-03-29 03:20:23.752980 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:23.752988 | orchestrator | Sunday 29 March 2026 03:20:23 +0000 (0:00:00.314) 0:00:08.722 **********
2026-03-29 03:20:23.752996 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.753049 | orchestrator |
2026-03-29 03:20:23.753059 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:23.753067 | orchestrator | Sunday 29 March 2026 03:20:23 +0000 (0:00:00.338) 0:00:09.060 **********
2026-03-29 03:20:23.753075 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:23.753083 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:23.753091 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:23.753098 | orchestrator |
2026-03-29 03:20:23.753107 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:23.753120 | orchestrator | Sunday 29 March 2026 03:20:23 +0000 (0:00:00.315) 0:00:09.375 **********
2026-03-29 03:20:37.475295 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:37.475437 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:37.475463 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:37.475480 | orchestrator |
2026-03-29 03:20:37.475497 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:37.475514 | orchestrator | Sunday 29 March 2026 03:20:24 +0000 (0:00:00.324) 0:00:09.700 **********
2026-03-29 03:20:37.475561 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:37.475579 | orchestrator |
2026-03-29 03:20:37.475596 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:37.475613 | orchestrator | Sunday 29 March 2026 03:20:24 +0000 (0:00:00.166) 0:00:09.867 **********
2026-03-29 03:20:37.475627 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:37.475638 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:37.475647 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:37.475657 | orchestrator |
2026-03-29 03:20:37.475667 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:37.475676 | orchestrator | Sunday 29 March 2026 03:20:24 +0000 (0:00:00.297) 0:00:10.165 **********
2026-03-29 03:20:37.475686 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:37.475696 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:37.475705 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:37.475714 | orchestrator |
2026-03-29 03:20:37.475724 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:37.475733 | orchestrator | Sunday 29 March 2026 03:20:25 +0000 (0:00:00.567) 0:00:10.732 **********
2026-03-29 03:20:37.475743 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:37.475752 | orchestrator |
2026-03-29 03:20:37.475762 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:37.475772 | orchestrator | Sunday 29 March 2026 03:20:25 +0000 (0:00:00.136) 0:00:10.868 **********
2026-03-29 03:20:37.475781 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:37.475791 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:37.475800 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:37.475809 | orchestrator |
2026-03-29 03:20:37.475821 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:37.475832 | orchestrator | Sunday 29 March 2026 03:20:25 +0000 (0:00:00.320) 0:00:11.188 **********
2026-03-29 03:20:37.475843 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:37.475854 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:37.475864 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:37.475875 | orchestrator |
2026-03-29 03:20:37.475886 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:37.475897 | orchestrator | Sunday 29 March 2026 03:20:25 +0000 (0:00:00.349) 0:00:11.538 **********
2026-03-29 03:20:37.475908 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:37.475920 | orchestrator |
2026-03-29 03:20:37.475931 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:37.475942 | orchestrator | Sunday 29 March 2026 03:20:26 +0000 (0:00:00.142) 0:00:11.681 **********
2026-03-29 03:20:37.475953 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:37.475964 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:37.475975 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:37.475988 | orchestrator |
2026-03-29 03:20:37.476040 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 03:20:37.476075 | orchestrator | Sunday 29 March 2026 03:20:26 +0000 (0:00:00.561) 0:00:12.243 **********
2026-03-29 03:20:37.476092 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:20:37.476106 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:20:37.476117 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:20:37.476128 | orchestrator |
2026-03-29 03:20:37.476139 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 03:20:37.476154 | orchestrator | Sunday 29 March 2026 03:20:26 +0000 (0:00:00.341) 0:00:12.584 **********
2026-03-29 03:20:37.476171 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:37.476188 | orchestrator |
2026-03-29 03:20:37.476204 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 03:20:37.476221 | orchestrator | Sunday 29 March 2026 03:20:27 +0000 (0:00:00.146) 0:00:12.731 **********
2026-03-29 03:20:37.476238 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:20:37.476254 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:20:37.476287 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:20:37.476303 | orchestrator |
2026-03-29 03:20:37.476320 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-29 03:20:37.476331 | orchestrator | Sunday 29 March 2026 03:20:27 +0000 (0:00:00.302) 0:00:13.033 **********
2026-03-29 03:20:37.476340 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:20:37.476350 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:20:37.476360 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:20:37.476369 | orchestrator |
2026-03-29 03:20:37.476379 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-29 03:20:37.476389 | orchestrator | Sunday 29 March 2026 03:20:29 +0000 (0:00:01.952) 0:00:14.986 **********
2026-03-29 03:20:37.476398 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 03:20:37.476409 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 03:20:37.476419 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 03:20:37.476428 | orchestrator |
2026-03-29 03:20:37.476438 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-29 03:20:37.476447 | orchestrator | Sunday 29 March 2026 03:20:31 +0000 (0:00:01.885) 0:00:16.872 **********
2026-03-29 03:20:37.476457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-29 03:20:37.476468 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-29 03:20:37.476477 | orchestrator |
changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-29 03:20:37.476487 | orchestrator | 2026-03-29 03:20:37.476497 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-29 03:20:37.476526 | orchestrator | Sunday 29 March 2026 03:20:32 +0000 (0:00:01.758) 0:00:18.630 ********** 2026-03-29 03:20:37.476537 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-29 03:20:37.476549 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-29 03:20:37.476566 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-29 03:20:37.476580 | orchestrator | 2026-03-29 03:20:37.476606 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-29 03:20:37.476621 | orchestrator | Sunday 29 March 2026 03:20:34 +0000 (0:00:01.479) 0:00:20.110 ********** 2026-03-29 03:20:37.476636 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:20:37.476652 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:20:37.476668 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:20:37.476684 | orchestrator | 2026-03-29 03:20:37.476700 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-29 03:20:37.476715 | orchestrator | Sunday 29 March 2026 03:20:34 +0000 (0:00:00.402) 0:00:20.513 ********** 2026-03-29 03:20:37.476725 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:20:37.476735 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:20:37.476745 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:20:37.476754 | orchestrator | 2026-03-29 03:20:37.476764 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 03:20:37.476773 
| orchestrator | Sunday 29 March 2026 03:20:35 +0000 (0:00:00.257) 0:00:20.770 ********** 2026-03-29 03:20:37.476783 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:20:37.476792 | orchestrator | 2026-03-29 03:20:37.476805 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-29 03:20:37.476821 | orchestrator | Sunday 29 March 2026 03:20:35 +0000 (0:00:00.573) 0:00:21.343 ********** 2026-03-29 03:20:37.476857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 03:20:37.476910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 03:20:38.013310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 03:20:38.013394 | orchestrator | 2026-03-29 03:20:38.013401 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-29 03:20:38.013406 | orchestrator | Sunday 29 March 2026 03:20:37 +0000 (0:00:01.748) 0:00:23.092 ********** 2026-03-29 03:20:38.013424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 03:20:38.013447 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:20:38.013465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 03:20:38.013472 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 03:20:38.013487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 03:20:40.259450 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:20:40.259554 | orchestrator | 2026-03-29 03:20:40.259561 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-29 03:20:40.259567 | orchestrator | Sunday 29 March 2026 03:20:38 +0000 (0:00:00.541) 0:00:23.634 ********** 2026-03-29 03:20:40.259575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 03:20:40.259596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 03:20:40.259618 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:20:40.259622 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:20:40.259678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 03:20:40.259689 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:20:40.259693 | orchestrator | 2026-03-29 03:20:40.259697 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-29 03:20:40.259700 | orchestrator | Sunday 29 March 2026 03:20:38 +0000 (0:00:00.798) 0:00:24.433 ********** 2026-03-29 03:20:40.259714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 03:21:28.165078 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 03:21:28.165259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 03:21:28.165285 | orchestrator | 2026-03-29 03:21:28.165301 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 03:21:28.165318 | orchestrator | Sunday 29 March 2026 03:20:40 +0000 (0:00:01.449) 0:00:25.883 ********** 2026-03-29 03:21:28.165333 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:21:28.165350 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:21:28.165364 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:21:28.165379 | orchestrator | 2026-03-29 03:21:28.165394 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 03:21:28.165403 | orchestrator | Sunday 29 March 2026 03:20:40 +0000 (0:00:00.297) 0:00:26.181 ********** 2026-03-29 03:21:28.165412 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:21:28.165421 | orchestrator | 2026-03-29 03:21:28.165430 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-29 03:21:28.165438 | orchestrator | Sunday 29 March 2026 03:20:41 +0000 (0:00:00.486) 0:00:26.668 ********** 2026-03-29 03:21:28.165447 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:21:28.165466 | orchestrator | 2026-03-29 03:21:28.165475 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-29 03:21:28.165484 | orchestrator | Sunday 29 March 2026 03:20:43 +0000 (0:00:02.341) 0:00:29.009 ********** 2026-03-29 03:21:28.165495 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 03:21:28.165504 | orchestrator | 2026-03-29 03:21:28.165514 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-29 03:21:28.165525 | orchestrator | Sunday 29 March 2026 03:20:46 +0000 (0:00:02.760) 0:00:31.769 ********** 2026-03-29 03:21:28.165535 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:21:28.165544 | orchestrator | 2026-03-29 03:21:28.165555 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 03:21:28.165565 | orchestrator | Sunday 29 March 2026 03:21:02 +0000 (0:00:16.743) 0:00:48.512 ********** 2026-03-29 03:21:28.165575 | orchestrator | 2026-03-29 03:21:28.165584 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 03:21:28.165594 | orchestrator | Sunday 29 March 2026 03:21:02 +0000 (0:00:00.076) 0:00:48.588 ********** 2026-03-29 03:21:28.165605 | orchestrator | 2026-03-29 03:21:28.165620 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 03:21:28.165635 | orchestrator | Sunday 29 March 2026 03:21:03 +0000 (0:00:00.065) 0:00:48.654 ********** 2026-03-29 03:21:28.165649 | orchestrator | 2026-03-29 03:21:28.165663 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-29 03:21:28.165677 | orchestrator | Sunday 29 March 2026 03:21:03 +0000 (0:00:00.073) 0:00:48.727 ********** 2026-03-29 03:21:28.165690 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:21:28.165705 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:21:28.165719 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:21:28.165734 | orchestrator | 2026-03-29 03:21:28.165750 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:21:28.165767 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 
skipped=25  rescued=0 ignored=0 2026-03-29 03:21:28.165784 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-29 03:21:28.165800 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-29 03:21:28.165815 | orchestrator | 2026-03-29 03:21:28.165829 | orchestrator | 2026-03-29 03:21:28.165845 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:21:28.165861 | orchestrator | Sunday 29 March 2026 03:21:28 +0000 (0:00:25.039) 0:01:13.767 ********** 2026-03-29 03:21:28.165886 | orchestrator | =============================================================================== 2026-03-29 03:21:28.165902 | orchestrator | horizon : Restart horizon container ------------------------------------ 25.04s 2026-03-29 03:21:28.165912 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.74s 2026-03-29 03:21:28.165921 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.76s 2026-03-29 03:21:28.165930 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.34s 2026-03-29 03:21:28.165938 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.95s 2026-03-29 03:21:28.165947 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.89s 2026-03-29 03:21:28.165980 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.76s 2026-03-29 03:21:28.165991 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.75s 2026-03-29 03:21:28.165999 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.48s 2026-03-29 03:21:28.166008 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.45s 
2026-03-29 03:21:28.166067 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.14s 2026-03-29 03:21:28.166086 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.80s 2026-03-29 03:21:28.166095 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2026-03-29 03:21:28.166113 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-03-29 03:21:28.657221 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2026-03-29 03:21:28.657310 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2026-03-29 03:21:28.657320 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.54s 2026-03-29 03:21:28.657328 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-03-29 03:21:28.657335 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-03-29 03:21:28.657342 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.49s 2026-03-29 03:21:30.944064 | orchestrator | 2026-03-29 03:21:30 | INFO  | Task a048ceeb-d4e7-43eb-a386-bb7f681a27d8 (skyline) was prepared for execution. 2026-03-29 03:21:30.944150 | orchestrator | 2026-03-29 03:21:30 | INFO  | It takes a moment until task a048ceeb-d4e7-43eb-a386-bb7f681a27d8 (skyline) has been started and output is visible here. 
2026-03-29 03:22:02.884851 | orchestrator | 2026-03-29 03:22:02.884996 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:22:02.885012 | orchestrator | 2026-03-29 03:22:02.885020 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:22:02.885027 | orchestrator | Sunday 29 March 2026 03:21:35 +0000 (0:00:00.270) 0:00:00.270 ********** 2026-03-29 03:22:02.885034 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:22:02.885043 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:22:02.885051 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:22:02.885059 | orchestrator | 2026-03-29 03:22:02.885067 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:22:02.885075 | orchestrator | Sunday 29 March 2026 03:21:35 +0000 (0:00:00.302) 0:00:00.573 ********** 2026-03-29 03:22:02.885083 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-03-29 03:22:02.885091 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-03-29 03:22:02.885099 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-03-29 03:22:02.885107 | orchestrator | 2026-03-29 03:22:02.885114 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-03-29 03:22:02.885122 | orchestrator | 2026-03-29 03:22:02.885130 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-29 03:22:02.885138 | orchestrator | Sunday 29 March 2026 03:21:35 +0000 (0:00:00.447) 0:00:01.021 ********** 2026-03-29 03:22:02.885146 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:22:02.885155 | orchestrator | 2026-03-29 03:22:02.885163 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-03-29 03:22:02.885171 | orchestrator | Sunday 29 March 2026 03:21:36 +0000 (0:00:00.554) 0:00:01.575 ********** 2026-03-29 03:22:02.885178 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-03-29 03:22:02.885186 | orchestrator | 2026-03-29 03:22:02.885194 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-03-29 03:22:02.885201 | orchestrator | Sunday 29 March 2026 03:21:39 +0000 (0:00:03.520) 0:00:05.096 ********** 2026-03-29 03:22:02.885209 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-03-29 03:22:02.885217 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-03-29 03:22:02.885225 | orchestrator | 2026-03-29 03:22:02.885233 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-03-29 03:22:02.885241 | orchestrator | Sunday 29 March 2026 03:21:46 +0000 (0:00:06.946) 0:00:12.043 ********** 2026-03-29 03:22:02.885274 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 03:22:02.885283 | orchestrator | 2026-03-29 03:22:02.885291 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-03-29 03:22:02.885299 | orchestrator | Sunday 29 March 2026 03:21:50 +0000 (0:00:03.405) 0:00:15.448 ********** 2026-03-29 03:22:02.885306 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 03:22:02.885328 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-03-29 03:22:02.885336 | orchestrator | 2026-03-29 03:22:02.885344 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-03-29 03:22:02.885352 | orchestrator | Sunday 29 March 2026 03:21:54 +0000 (0:00:04.097) 0:00:19.545 ********** 2026-03-29 03:22:02.885359 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-29 03:22:02.885367 | orchestrator | 2026-03-29 03:22:02.885375 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-03-29 03:22:02.885382 | orchestrator | Sunday 29 March 2026 03:21:57 +0000 (0:00:03.304) 0:00:22.850 ********** 2026-03-29 03:22:02.885390 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-03-29 03:22:02.885398 | orchestrator | 2026-03-29 03:22:02.885406 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-03-29 03:22:02.885414 | orchestrator | Sunday 29 March 2026 03:22:01 +0000 (0:00:03.863) 0:00:26.713 ********** 2026-03-29 03:22:02.885425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:02.885453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:02.885462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:02.885482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:02.885490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:02.885504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:06.739260 | orchestrator | 2026-03-29 03:22:06.739340 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-29 03:22:06.739352 | orchestrator | Sunday 29 March 2026 03:22:02 +0000 (0:00:01.350) 0:00:28.064 ********** 2026-03-29 03:22:06.739361 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:22:06.739369 | orchestrator | 2026-03-29 03:22:06.739377 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-03-29 03:22:06.739384 | orchestrator | Sunday 29 March 2026 03:22:03 +0000 (0:00:00.723) 0:00:28.787 ********** 2026-03-29 03:22:06.739395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:06.739443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:06.739454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:06.739475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:06.739485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:06.739502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:06.739507 | orchestrator | 2026-03-29 03:22:06.739512 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-03-29 03:22:06.739519 | orchestrator | Sunday 29 March 2026 03:22:06 +0000 (0:00:02.465) 0:00:31.253 ********** 2026-03-29 03:22:06.739526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 03:22:06.739534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 03:22:06.739542 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:22:06.739556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001285 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:22:08.001336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001381 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:22:08.001400 | orchestrator | 2026-03-29 03:22:08.001421 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-03-29 03:22:08.001442 | orchestrator | Sunday 29 March 2026 03:22:06 +0000 (0:00:00.668) 0:00:31.922 ********** 2026-03-29 03:22:08.001462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001543 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:22:08.001561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001585 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:22:08.001596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 03:22:08.001624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 03:22:16.390738 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:22:16.390863 | orchestrator | 2026-03-29 03:22:16.390886 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-03-29 03:22:16.390902 | orchestrator | Sunday 29 March 2026 03:22:07 +0000 (0:00:01.255) 0:00:33.177 ********** 2026-03-29 03:22:16.391007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:16.391030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:16.391045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:16.391087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:16.391133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:16.391151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:16.391164 | orchestrator | 2026-03-29 03:22:16.391179 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-03-29 03:22:16.391192 | orchestrator | Sunday 29 March 2026 03:22:10 +0000 (0:00:02.438) 0:00:35.616 ********** 2026-03-29 03:22:16.391205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-29 03:22:16.391216 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-29 03:22:16.391233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-29 03:22:16.391241 | orchestrator | 2026-03-29 03:22:16.391250 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-03-29 03:22:16.391260 | orchestrator | Sunday 29 March 2026 03:22:11 +0000 (0:00:01.526) 0:00:37.142 ********** 2026-03-29 03:22:16.391269 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-29 03:22:16.391282 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-29 03:22:16.391295 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-29 03:22:16.391309 | orchestrator | 2026-03-29 03:22:16.391323 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-03-29 03:22:16.391336 | orchestrator | Sunday 29 March 2026 03:22:13 +0000 (0:00:02.033) 0:00:39.176 ********** 2026-03-29 03:22:16.391350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:16.391382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472411 | orchestrator | 2026-03-29 03:22:18.472418 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-03-29 03:22:18.472426 | orchestrator | Sunday 29 March 2026 03:22:16 +0000 (0:00:02.398) 0:00:41.575 ********** 2026-03-29 03:22:18.472442 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:22:18.472449 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 03:22:18.472454 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:22:18.472460 | orchestrator | 2026-03-29 03:22:18.472477 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-03-29 03:22:18.472483 | orchestrator | Sunday 29 March 2026 03:22:16 +0000 (0:00:00.302) 0:00:41.877 ********** 2026-03-29 03:22:18.472489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:18.472530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:51.118628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 03:22:51.118750 | orchestrator | 2026-03-29 03:22:51.118769 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-03-29 03:22:51.118783 | orchestrator | Sunday 29 March 2026 03:22:18 +0000 (0:00:01.777) 0:00:43.654 ********** 2026-03-29 03:22:51.118794 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:22:51.118806 | orchestrator | 2026-03-29 03:22:51.118818 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-03-29 03:22:51.118829 | orchestrator | Sunday 29 March 2026 03:22:20 +0000 (0:00:02.275) 0:00:45.929 ********** 2026-03-29 03:22:51.118840 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:22:51.118850 | orchestrator | 2026-03-29 03:22:51.118862 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-03-29 03:22:51.118874 | orchestrator | Sunday 29 March 2026 03:22:23 +0000 (0:00:02.301) 0:00:48.231 ********** 2026-03-29 03:22:51.118884 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:22:51.118895 | orchestrator | 2026-03-29 03:22:51.118978 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-29 03:22:51.118990 | orchestrator | Sunday 29 March 2026 03:22:30 +0000 (0:00:07.450) 0:00:55.682 ********** 2026-03-29 03:22:51.119001 | orchestrator | 2026-03-29 03:22:51.119012 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-29 03:22:51.119023 | orchestrator | Sunday 29 March 2026 03:22:30 +0000 (0:00:00.069) 0:00:55.751 ********** 2026-03-29 03:22:51.119034 | orchestrator | 2026-03-29 03:22:51.119045 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-03-29 03:22:51.119056 | orchestrator | Sunday 29 March 2026 03:22:30 +0000 (0:00:00.067) 0:00:55.818 ********** 2026-03-29 03:22:51.119072 | orchestrator | 2026-03-29 03:22:51.119091 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-03-29 03:22:51.119110 | orchestrator | Sunday 29 March 2026 03:22:30 +0000 (0:00:00.069) 0:00:55.888 ********** 2026-03-29 03:22:51.119128 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:22:51.119147 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:22:51.119165 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:22:51.119184 | orchestrator | 2026-03-29 03:22:51.119200 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-03-29 03:22:51.119217 | orchestrator | Sunday 29 March 2026 03:22:37 +0000 (0:00:06.351) 0:01:02.239 ********** 2026-03-29 03:22:51.119234 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:22:51.119253 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:22:51.119271 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:22:51.119290 | orchestrator | 2026-03-29 03:22:51.119308 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:22:51.119331 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 03:22:51.119351 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 03:22:51.119404 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 03:22:51.119425 | orchestrator | 2026-03-29 03:22:51.119442 | orchestrator | 2026-03-29 03:22:51.119479 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:22:51.119502 | orchestrator | Sunday 29 March 
2026 03:22:50 +0000 (0:00:13.720) 0:01:15.959 ********** 2026-03-29 03:22:51.119519 | orchestrator | =============================================================================== 2026-03-29 03:22:51.119537 | orchestrator | skyline : Restart skyline-console container ---------------------------- 13.72s 2026-03-29 03:22:51.119553 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.45s 2026-03-29 03:22:51.119571 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.95s 2026-03-29 03:22:51.119589 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.35s 2026-03-29 03:22:51.119607 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.10s 2026-03-29 03:22:51.119625 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.86s 2026-03-29 03:22:51.119642 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.52s 2026-03-29 03:22:51.119660 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.41s 2026-03-29 03:22:51.119705 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.30s 2026-03-29 03:22:51.119724 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.47s 2026-03-29 03:22:51.119742 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.44s 2026-03-29 03:22:51.119760 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.40s 2026-03-29 03:22:51.119776 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.30s 2026-03-29 03:22:51.119792 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.28s 2026-03-29 03:22:51.119809 | orchestrator | skyline : Copying over nginx.conf 
files for services -------------------- 2.03s 2026-03-29 03:22:51.119827 | orchestrator | skyline : Check skyline container --------------------------------------- 1.78s 2026-03-29 03:22:51.119843 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.53s 2026-03-29 03:22:51.119861 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.35s 2026-03-29 03:22:51.119879 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.26s 2026-03-29 03:22:51.119898 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.72s 2026-03-29 03:22:53.495139 | orchestrator | 2026-03-29 03:22:53 | INFO  | Task d3fb4b00-8578-4b6f-a860-d325c5b8054a (glance) was prepared for execution. 2026-03-29 03:22:53.495242 | orchestrator | 2026-03-29 03:22:53 | INFO  | It takes a moment until task d3fb4b00-8578-4b6f-a860-d325c5b8054a (glance) has been started and output is visible here. 
2026-03-29 03:23:27.973722 | orchestrator | 2026-03-29 03:23:27.973817 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:23:27.973826 | orchestrator | 2026-03-29 03:23:27.973830 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:23:27.973835 | orchestrator | Sunday 29 March 2026 03:22:57 +0000 (0:00:00.270) 0:00:00.270 ********** 2026-03-29 03:23:27.973839 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:23:27.973845 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:23:27.973849 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:23:27.973852 | orchestrator | 2026-03-29 03:23:27.973856 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:23:27.973861 | orchestrator | Sunday 29 March 2026 03:22:58 +0000 (0:00:00.309) 0:00:00.579 ********** 2026-03-29 03:23:27.973943 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-29 03:23:27.973953 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-29 03:23:27.973960 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-29 03:23:27.973973 | orchestrator | 2026-03-29 03:23:27.973977 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-29 03:23:27.973981 | orchestrator | 2026-03-29 03:23:27.973985 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-29 03:23:27.973989 | orchestrator | Sunday 29 March 2026 03:22:58 +0000 (0:00:00.446) 0:00:01.026 ********** 2026-03-29 03:23:27.973992 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:23:27.973997 | orchestrator | 2026-03-29 03:23:27.974001 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-29 
03:23:27.974005 | orchestrator | Sunday 29 March 2026 03:22:59 +0000 (0:00:00.591) 0:00:01.617 ********** 2026-03-29 03:23:27.974008 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-29 03:23:27.974012 | orchestrator | 2026-03-29 03:23:27.974047 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-29 03:23:27.974051 | orchestrator | Sunday 29 March 2026 03:23:02 +0000 (0:00:03.487) 0:00:05.105 ********** 2026-03-29 03:23:27.974055 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-29 03:23:27.974060 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-29 03:23:27.974064 | orchestrator | 2026-03-29 03:23:27.974068 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-29 03:23:27.974072 | orchestrator | Sunday 29 March 2026 03:23:09 +0000 (0:00:06.599) 0:00:11.705 ********** 2026-03-29 03:23:27.974076 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 03:23:27.974081 | orchestrator | 2026-03-29 03:23:27.974085 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-29 03:23:27.974089 | orchestrator | Sunday 29 March 2026 03:23:12 +0000 (0:00:03.333) 0:00:15.038 ********** 2026-03-29 03:23:27.974103 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 03:23:27.974107 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-29 03:23:27.974111 | orchestrator | 2026-03-29 03:23:27.974120 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-29 03:23:27.974123 | orchestrator | Sunday 29 March 2026 03:23:16 +0000 (0:00:04.119) 0:00:19.158 ********** 2026-03-29 03:23:27.974127 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 
03:23:27.974131 | orchestrator | 2026-03-29 03:23:27.974135 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-29 03:23:27.974139 | orchestrator | Sunday 29 March 2026 03:23:19 +0000 (0:00:03.329) 0:00:22.487 ********** 2026-03-29 03:23:27.974143 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-29 03:23:27.974153 | orchestrator | 2026-03-29 03:23:27.974157 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-29 03:23:27.974161 | orchestrator | Sunday 29 March 2026 03:23:23 +0000 (0:00:03.839) 0:00:26.326 ********** 2026-03-29 03:23:27.974182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:23:27.974199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:23:27.974204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:23:27.974213 | orchestrator | 2026-03-29 03:23:27.974217 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-03-29 03:23:27.974220 | orchestrator | Sunday 29 March 2026 03:23:27 +0000 (0:00:03.480) 0:00:29.807 ********** 2026-03-29 03:23:27.974225 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:23:27.974229 | orchestrator | 2026-03-29 03:23:27.974235 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-29 03:23:43.303576 | orchestrator | Sunday 29 March 2026 03:23:27 +0000 (0:00:00.727) 0:00:30.535 ********** 2026-03-29 03:23:43.303656 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:23:43.303663 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:23:43.303668 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:23:43.303672 | orchestrator | 2026-03-29 03:23:43.303676 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-29 03:23:43.303681 | orchestrator | Sunday 29 March 2026 03:23:31 +0000 (0:00:03.522) 0:00:34.057 ********** 2026-03-29 03:23:43.303686 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 03:23:43.303691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 03:23:43.303695 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 03:23:43.303698 | orchestrator | 2026-03-29 03:23:43.303702 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-29 03:23:43.303706 | orchestrator | Sunday 29 March 2026 03:23:33 +0000 (0:00:01.585) 0:00:35.642 ********** 2026-03-29 03:23:43.303710 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 
03:23:43.303714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 03:23:43.303718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 03:23:43.303721 | orchestrator | 2026-03-29 03:23:43.303725 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-29 03:23:43.303729 | orchestrator | Sunday 29 March 2026 03:23:34 +0000 (0:00:01.442) 0:00:37.084 ********** 2026-03-29 03:23:43.303733 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:23:43.303738 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:23:43.303742 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:23:43.303746 | orchestrator | 2026-03-29 03:23:43.303750 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-29 03:23:43.303753 | orchestrator | Sunday 29 March 2026 03:23:35 +0000 (0:00:00.713) 0:00:37.798 ********** 2026-03-29 03:23:43.303757 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:23:43.303761 | orchestrator | 2026-03-29 03:23:43.303765 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-29 03:23:43.303769 | orchestrator | Sunday 29 March 2026 03:23:35 +0000 (0:00:00.160) 0:00:37.958 ********** 2026-03-29 03:23:43.303772 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:23:43.303776 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:23:43.303780 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:23:43.303784 | orchestrator | 2026-03-29 03:23:43.303798 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-29 03:23:43.303802 | orchestrator | Sunday 29 March 2026 03:23:35 +0000 (0:00:00.303) 0:00:38.261 ********** 2026-03-29 03:23:43.303806 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:23:43.303824 | orchestrator | 2026-03-29 03:23:43.303828 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-29 03:23:43.303832 | orchestrator | Sunday 29 March 2026 03:23:36 +0000 (0:00:00.780) 0:00:39.042 ********** 2026-03-29 03:23:43.303839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:23:43.303858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:23:43.303867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:23:43.303906 | orchestrator | 2026-03-29 03:23:43.303912 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-29 03:23:43.303915 | orchestrator | Sunday 29 March 2026 03:23:40 +0000 (0:00:03.871) 0:00:42.913 ********** 2026-03-29 03:23:43.303924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 03:23:46.729337 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:23:46.729443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 03:23:46.729472 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:23:46.729478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 03:23:46.729482 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:23:46.729486 | orchestrator | 2026-03-29 03:23:46.729491 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-29 03:23:46.729496 | orchestrator | Sunday 29 March 2026 03:23:43 +0000 (0:00:02.952) 0:00:45.866 ********** 2026-03-29 03:23:46.729516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 03:23:46.729526 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:23:46.729530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 03:23:46.729534 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:23:46.729543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 03:24:20.144105 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:24:20.144186 | orchestrator | 2026-03-29 03:24:20.144193 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-29 03:24:20.144210 | orchestrator | Sunday 29 March 2026 03:23:46 +0000 (0:00:03.423) 0:00:49.290 ********** 2026-03-29 03:24:20.144214 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:24:20.144218 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:24:20.144222 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:24:20.144226 | orchestrator | 2026-03-29 03:24:20.144230 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-29 03:24:20.144234 | orchestrator | Sunday 29 March 2026 03:23:49 +0000 (0:00:03.166) 0:00:52.456 ********** 2026-03-29 03:24:20.144240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:24:20.144247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:24:20.144281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:24:20.144287 | orchestrator | 2026-03-29 03:24:20.144291 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-29 03:24:20.144303 | orchestrator | Sunday 29 March 2026 03:23:53 +0000 (0:00:03.972) 0:00:56.429 ********** 2026-03-29 03:24:20.144307 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:24:20.144310 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:24:20.144314 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:24:20.144318 | orchestrator | 2026-03-29 03:24:20.144323 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-29 03:24:20.144329 | orchestrator | Sunday 29 March 2026 03:23:59 +0000 (0:00:05.502) 0:01:01.931 ********** 2026-03-29 03:24:20.144336 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:24:20.144346 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:24:20.144353 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 03:24:20.144359 | orchestrator | 2026-03-29 03:24:20.144365 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-29 03:24:20.144371 | orchestrator | Sunday 29 March 2026 03:24:02 +0000 (0:00:03.374) 0:01:05.306 ********** 2026-03-29 03:24:20.144378 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:24:20.144383 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:24:20.144389 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:24:20.144395 | orchestrator | 2026-03-29 03:24:20.144401 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-29 03:24:20.144406 | orchestrator | Sunday 29 March 2026 03:24:05 +0000 (0:00:03.187) 0:01:08.493 ********** 2026-03-29 03:24:20.144412 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:24:20.144418 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:24:20.144423 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:24:20.144429 | orchestrator | 2026-03-29 03:24:20.144435 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-29 03:24:20.144448 | orchestrator | Sunday 29 March 2026 03:24:09 +0000 (0:00:03.235) 0:01:11.729 ********** 2026-03-29 03:24:20.144454 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:24:20.144461 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:24:20.144467 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:24:20.144473 | orchestrator | 2026-03-29 03:24:20.144480 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-29 03:24:20.144486 | orchestrator | Sunday 29 March 2026 03:24:12 +0000 (0:00:03.348) 0:01:15.077 ********** 2026-03-29 03:24:20.144492 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:24:20.144498 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:24:20.144505 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 03:24:20.144510 | orchestrator | 2026-03-29 03:24:20.144516 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-29 03:24:20.144523 | orchestrator | Sunday 29 March 2026 03:24:13 +0000 (0:00:00.542) 0:01:15.619 ********** 2026-03-29 03:24:20.144530 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-29 03:24:20.144539 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:24:20.144543 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-29 03:24:20.144547 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:24:20.144551 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-29 03:24:20.144555 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:24:20.144558 | orchestrator | 2026-03-29 03:24:20.144562 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-29 03:24:20.144566 | orchestrator | Sunday 29 March 2026 03:24:16 +0000 (0:00:03.067) 0:01:18.687 ********** 2026-03-29 03:24:20.144569 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:24:20.144573 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:24:20.144577 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:24:20.144580 | orchestrator | 2026-03-29 03:24:20.144584 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-29 03:24:20.144592 | orchestrator | Sunday 29 March 2026 03:24:20 +0000 (0:00:04.011) 0:01:22.698 ********** 2026-03-29 03:25:29.900965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:25:29.901109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:25:29.901158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 03:25:29.901166 | orchestrator | 2026-03-29 03:25:29.901174 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-29 03:25:29.901181 | orchestrator | Sunday 29 March 2026 03:24:23 +0000 (0:00:03.726) 0:01:26.424 ********** 2026-03-29 03:25:29.901187 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:25:29.901194 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:25:29.901199 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:25:29.901205 | orchestrator | 2026-03-29 03:25:29.901211 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-29 03:25:29.901216 | orchestrator | Sunday 29 March 2026 03:24:24 +0000 (0:00:00.498) 0:01:26.923 ********** 2026-03-29 03:25:29.901228 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:25:29.901235 | orchestrator | 2026-03-29 03:25:29.901240 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-29 03:25:29.901246 | orchestrator | Sunday 29 March 2026 03:24:26 +0000 (0:00:02.190) 0:01:29.113 ********** 2026-03-29 03:25:29.901252 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:25:29.901258 | orchestrator | 2026-03-29 03:25:29.901263 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-29 03:25:29.901268 | orchestrator | Sunday 29 March 2026 03:24:28 +0000 (0:00:02.267) 0:01:31.381 ********** 2026-03-29 03:25:29.901274 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:25:29.901279 | orchestrator | 2026-03-29 03:25:29.901285 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-29 03:25:29.901290 | orchestrator | Sunday 29 March 2026 03:24:31 +0000 (0:00:02.291) 0:01:33.672 ********** 2026-03-29 03:25:29.901295 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:25:29.901300 | orchestrator | 2026-03-29 03:25:29.901306 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-29 03:25:29.901311 | orchestrator | Sunday 29 March 2026 03:25:00 +0000 (0:00:29.162) 0:02:02.834 ********** 2026-03-29 03:25:29.901317 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:25:29.901322 | orchestrator | 2026-03-29 03:25:29.901328 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-29 03:25:29.901334 | orchestrator | Sunday 29 March 2026 03:25:02 +0000 (0:00:02.265) 0:02:05.099 ********** 2026-03-29 03:25:29.901341 | orchestrator | 2026-03-29 03:25:29.901345 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-29 03:25:29.901349 | orchestrator | Sunday 29 March 2026 03:25:02 +0000 (0:00:00.079) 0:02:05.179 ********** 2026-03-29 03:25:29.901352 | orchestrator | 2026-03-29 03:25:29.901356 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-29 03:25:29.901360 | orchestrator | Sunday 29 March 2026 03:25:02 +0000 (0:00:00.086) 0:02:05.266 ********** 2026-03-29 03:25:29.901363 | orchestrator | 2026-03-29 03:25:29.901367 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-29 03:25:29.901371 | orchestrator | Sunday 29 March 2026 03:25:02 +0000 (0:00:00.070) 0:02:05.336 ********** 2026-03-29 03:25:29.901374 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:25:29.901378 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:25:29.901382 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:25:29.901386 | orchestrator | 2026-03-29 03:25:29.901389 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:25:29.901394 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 03:25:29.901399 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 03:25:29.901403 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 03:25:29.901407 | orchestrator | 2026-03-29 03:25:29.901410 | orchestrator | 2026-03-29 03:25:29.901414 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:25:29.901418 | orchestrator | Sunday 29 March 2026 03:25:29 +0000 (0:00:27.113) 0:02:32.449 ********** 2026-03-29 03:25:29.901422 | orchestrator | =============================================================================== 2026-03-29 03:25:29.901425 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.16s 2026-03-29 03:25:29.901429 | orchestrator | glance : Restart glance-api container ---------------------------------- 27.11s 2026-03-29 03:25:29.901433 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.60s 2026-03-29 03:25:29.901442 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.50s 2026-03-29 03:25:30.223288 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.12s 2026-03-29 03:25:30.223395 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.01s 2026-03-29 03:25:30.223404 | orchestrator | glance : Copying over config.json files for services -------------------- 3.97s 2026-03-29 03:25:30.223408 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.87s 2026-03-29 03:25:30.223412 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.84s 2026-03-29 03:25:30.223416 | orchestrator | glance : Check glance containers ---------------------------------------- 3.73s 2026-03-29 03:25:30.223420 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.52s 2026-03-29 03:25:30.223424 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.49s 2026-03-29 03:25:30.223428 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.48s 2026-03-29 03:25:30.223432 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.42s 2026-03-29 03:25:30.223436 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.37s 2026-03-29 03:25:30.223439 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.35s 2026-03-29 03:25:30.223443 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.33s 2026-03-29 03:25:30.223447 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.33s 2026-03-29 03:25:30.223451 | orchestrator | 
glance : Copying over glance-image-import.conf -------------------------- 3.24s 2026-03-29 03:25:30.223455 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.19s 2026-03-29 03:25:32.528133 | orchestrator | 2026-03-29 03:25:32 | INFO  | Task 0f7e7c8e-5854-43f7-8eed-89fa5d33dd6c (cinder) was prepared for execution. 2026-03-29 03:25:32.528228 | orchestrator | 2026-03-29 03:25:32 | INFO  | It takes a moment until task 0f7e7c8e-5854-43f7-8eed-89fa5d33dd6c (cinder) has been started and output is visible here. 2026-03-29 03:26:09.619562 | orchestrator | 2026-03-29 03:26:09.619716 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:26:09.619735 | orchestrator | 2026-03-29 03:26:09.619748 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:26:09.619760 | orchestrator | Sunday 29 March 2026 03:25:36 +0000 (0:00:00.253) 0:00:00.253 ********** 2026-03-29 03:26:09.619773 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:26:09.619786 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:26:09.619867 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:26:09.619882 | orchestrator | 2026-03-29 03:26:09.619895 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:26:09.619907 | orchestrator | Sunday 29 March 2026 03:25:36 +0000 (0:00:00.282) 0:00:00.535 ********** 2026-03-29 03:26:09.619919 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-29 03:26:09.619931 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-29 03:26:09.619943 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-29 03:26:09.619954 | orchestrator | 2026-03-29 03:26:09.619966 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-29 03:26:09.619978 | orchestrator | 2026-03-29 
03:26:09.619991 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 03:26:09.620004 | orchestrator | Sunday 29 March 2026 03:25:37 +0000 (0:00:00.323) 0:00:00.858 ********** 2026-03-29 03:26:09.620018 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:26:09.620033 | orchestrator | 2026-03-29 03:26:09.620046 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-29 03:26:09.620059 | orchestrator | Sunday 29 March 2026 03:25:37 +0000 (0:00:00.473) 0:00:01.331 ********** 2026-03-29 03:26:09.620073 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-29 03:26:09.620086 | orchestrator | 2026-03-29 03:26:09.620131 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-29 03:26:09.620146 | orchestrator | Sunday 29 March 2026 03:25:41 +0000 (0:00:03.796) 0:00:05.128 ********** 2026-03-29 03:26:09.620162 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-29 03:26:09.620177 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-29 03:26:09.620192 | orchestrator | 2026-03-29 03:26:09.620208 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-29 03:26:09.620223 | orchestrator | Sunday 29 March 2026 03:25:48 +0000 (0:00:06.837) 0:00:11.965 ********** 2026-03-29 03:26:09.620239 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 03:26:09.620254 | orchestrator | 2026-03-29 03:26:09.620269 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-29 03:26:09.620285 | orchestrator | Sunday 29 March 2026 03:25:52 +0000 (0:00:03.761) 
0:00:15.726 ********** 2026-03-29 03:26:09.620299 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 03:26:09.620314 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-29 03:26:09.620330 | orchestrator | 2026-03-29 03:26:09.620343 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-29 03:26:09.620356 | orchestrator | Sunday 29 March 2026 03:25:56 +0000 (0:00:04.308) 0:00:20.035 ********** 2026-03-29 03:26:09.620370 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 03:26:09.620384 | orchestrator | 2026-03-29 03:26:09.620397 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-29 03:26:09.620426 | orchestrator | Sunday 29 March 2026 03:25:59 +0000 (0:00:03.311) 0:00:23.346 ********** 2026-03-29 03:26:09.620440 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-29 03:26:09.620454 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-29 03:26:09.620467 | orchestrator | 2026-03-29 03:26:09.620480 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-29 03:26:09.620492 | orchestrator | Sunday 29 March 2026 03:26:07 +0000 (0:00:07.867) 0:00:31.214 ********** 2026-03-29 03:26:09.620509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:09.620549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:09.620575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:09.620590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:09.620611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:09.620625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:09.620638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:09.620660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:15.430652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:15.430764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:15.430791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:15.430845 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:15.430852 | orchestrator | 2026-03-29 03:26:15.430860 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 03:26:15.430869 | orchestrator | Sunday 29 March 2026 03:26:09 +0000 (0:00:02.108) 0:00:33.322 ********** 2026-03-29 03:26:15.430876 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:26:15.430885 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:26:15.430891 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:26:15.430898 | orchestrator | 2026-03-29 03:26:15.430904 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 03:26:15.430911 | orchestrator | Sunday 29 March 2026 03:26:10 +0000 (0:00:00.519) 0:00:33.842 ********** 2026-03-29 03:26:15.430919 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:26:15.430948 | orchestrator | 2026-03-29 03:26:15.430955 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-29 03:26:15.430962 | orchestrator | Sunday 29 March 2026 03:26:10 +0000 (0:00:00.563) 0:00:34.405 ********** 2026-03-29 03:26:15.430968 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-29 03:26:15.430975 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-29 03:26:15.430982 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-29 03:26:15.430988 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-29 03:26:15.430994 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-29 03:26:15.431000 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-29 03:26:15.431007 | orchestrator | 2026-03-29 03:26:15.431013 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-29 03:26:15.431020 | orchestrator | Sunday 29 March 2026 03:26:12 +0000 (0:00:01.624) 0:00:36.030 ********** 2026-03-29 03:26:15.431041 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 03:26:15.431050 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 03:26:15.431062 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 03:26:15.431068 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 03:26:15.431086 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 03:26:26.331610 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 03:26:26.331744 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 03:26:26.331780 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 03:26:26.331851 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 03:26:26.331891 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 03:26:26.331925 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 
03:26:26.331938 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 03:26:26.331950 | orchestrator | 2026-03-29 03:26:26.331964 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-29 03:26:26.331976 | orchestrator | Sunday 29 March 2026 03:26:15 +0000 (0:00:03.340) 0:00:39.370 ********** 2026-03-29 03:26:26.331988 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 03:26:26.332000 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 03:26:26.332010 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 03:26:26.332021 | orchestrator | 2026-03-29 03:26:26.332039 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-29 03:26:26.332051 | orchestrator | Sunday 29 March 2026 03:26:17 +0000 (0:00:01.493) 0:00:40.864 ********** 2026-03-29 03:26:26.332065 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-29 03:26:26.332077 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-29 03:26:26.332090 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-29 03:26:26.332102 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 03:26:26.332125 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 03:26:26.332138 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 03:26:26.332151 | orchestrator | 2026-03-29 03:26:26.332163 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-29 03:26:26.332175 | orchestrator | Sunday 29 March 2026 03:26:19 +0000 (0:00:02.739) 0:00:43.603 ********** 2026-03-29 03:26:26.332188 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-29 03:26:26.332202 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-29 03:26:26.332214 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-29 03:26:26.332227 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-29 03:26:26.332239 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-29 03:26:26.332252 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-29 03:26:26.332265 | orchestrator | 2026-03-29 03:26:26.332277 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-29 03:26:26.332290 | orchestrator | Sunday 29 March 2026 03:26:21 +0000 (0:00:01.069) 0:00:44.672 ********** 2026-03-29 03:26:26.332303 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:26:26.332316 | orchestrator | 2026-03-29 03:26:26.332329 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-29 03:26:26.332341 | orchestrator | Sunday 29 March 2026 03:26:21 +0000 (0:00:00.135) 0:00:44.808 ********** 2026-03-29 03:26:26.332354 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:26:26.332366 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 03:26:26.332378 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:26:26.332391 | orchestrator | 2026-03-29 03:26:26.332404 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 03:26:26.332416 | orchestrator | Sunday 29 March 2026 03:26:21 +0000 (0:00:00.514) 0:00:45.322 ********** 2026-03-29 03:26:26.332427 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:26:26.332438 | orchestrator | 2026-03-29 03:26:26.332449 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-29 03:26:26.332460 | orchestrator | Sunday 29 March 2026 03:26:22 +0000 (0:00:00.571) 0:00:45.893 ********** 2026-03-29 03:26:26.332480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:27.051481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:27.051634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:27.051655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:27.051666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:27.051676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:27.051703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:27.051714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:27.051737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 
03:26:27.051748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:27.051757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:27.051766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:27.051776 | orchestrator | 2026-03-29 03:26:27.051787 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-29 03:26:27.051893 | orchestrator | Sunday 29 March 2026 03:26:26 +0000 (0:00:04.142) 0:00:50.036 ********** 2026-03-29 03:26:27.051913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:27.150116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150288 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:26:27.150302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:27.150315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150410 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:26:27.150424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:27.150437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.150486 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 03:26:27.150499 | orchestrator | 2026-03-29 03:26:27.150514 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-29 03:26:27.150535 | orchestrator | Sunday 29 March 2026 03:26:27 +0000 (0:00:00.735) 0:00:50.771 ********** 2026-03-29 03:26:27.650729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:27.650939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.650973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.650994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.651054 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:26:27.651078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:27.651141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.651163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.651185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.651205 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:26:27.651230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:27.651271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:26:27.651305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 03:26:32.278667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:32.278785 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:26:32.278830 | orchestrator | 2026-03-29 03:26:32.278843 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-03-29 03:26:32.278855 | orchestrator | Sunday 29 March 2026 03:26:27 +0000 (0:00:00.779) 0:00:51.551 ********** 2026-03-29 03:26:32.278867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:32.278880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 
03:26:32.278913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:32.278943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:32.278962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:32.278973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:32.278984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:32.278995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:32.279013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:32.279042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:44.438730 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:44.438922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:44.438954 | orchestrator | 2026-03-29 03:26:44.438972 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-29 03:26:44.438988 | orchestrator | Sunday 29 March 2026 03:26:32 +0000 (0:00:04.422) 0:00:55.974 ********** 2026-03-29 03:26:44.439002 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-29 03:26:44.439018 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-29 03:26:44.439033 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-29 03:26:44.439077 | orchestrator | 2026-03-29 03:26:44.439091 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-29 03:26:44.439100 | orchestrator | Sunday 29 March 2026 03:26:34 +0000 (0:00:01.665) 0:00:57.639 ********** 2026-03-29 03:26:44.439110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:44.439121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:44.439156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:44.439168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:44.439178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:44.439194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:44.439203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:44.439213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:44.439232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:46.901265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:46.901377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:46.901410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:46.901420 | orchestrator | 2026-03-29 03:26:46.901430 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-29 03:26:46.901438 | orchestrator | Sunday 29 March 2026 03:26:44 +0000 (0:00:10.506) 0:01:08.146 ********** 2026-03-29 03:26:46.901446 | orchestrator | changed: [testbed-node-0] 
2026-03-29 03:26:46.901454 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:26:46.901462 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:26:46.901469 | orchestrator | 2026-03-29 03:26:46.901476 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-29 03:26:46.901484 | orchestrator | Sunday 29 March 2026 03:26:46 +0000 (0:00:01.574) 0:01:09.720 ********** 2026-03-29 03:26:46.901492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:46.901514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-29 03:26:46.901538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 03:26:46.901554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:46.901562 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:26:46.901570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:46.901577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:26:46.901589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 03:26:46.901603 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:50.503453 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:26:50.503537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 03:26:50.503550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:26:50.503559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 03:26:50.503566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 03:26:50.503573 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:26:50.503580 | orchestrator | 2026-03-29 
03:26:50.503587 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-29 03:26:50.503595 | orchestrator | Sunday 29 March 2026 03:26:46 +0000 (0:00:00.886) 0:01:10.606 ********** 2026-03-29 03:26:50.503602 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:26:50.503608 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:26:50.503615 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:26:50.503622 | orchestrator | 2026-03-29 03:26:50.503644 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-29 03:26:50.503652 | orchestrator | Sunday 29 March 2026 03:26:47 +0000 (0:00:00.571) 0:01:11.178 ********** 2026-03-29 03:26:50.503674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:50.503695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:50.503700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 03:26:50.503705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:50.503710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:50.503717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:26:50.503731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:28:20.374216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:28:20.374316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 03:28:20.374330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:28:20.374354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 03:28:20.374381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})
2026-03-29 03:28:20.374389 | orchestrator |
2026-03-29 03:28:20.374398 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-29 03:28:20.374407 | orchestrator | Sunday 29 March 2026 03:26:50 +0000 (0:00:03.036) 0:01:14.215 **********
2026-03-29 03:28:20.374415 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:28:20.374422 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:28:20.374429 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:28:20.374437 | orchestrator |
2026-03-29 03:28:20.374444 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-03-29 03:28:20.374450 | orchestrator | Sunday 29 March 2026 03:26:50 +0000 (0:00:00.302) 0:01:14.517 **********
2026-03-29 03:28:20.374457 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:28:20.374463 | orchestrator |
2026-03-29 03:28:20.374482 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-03-29 03:28:20.374489 | orchestrator | Sunday 29 March 2026 03:26:53 +0000 (0:00:02.135) 0:01:16.653 **********
2026-03-29 03:28:20.374496 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:28:20.374502 | orchestrator |
2026-03-29 03:28:20.374508 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-03-29 03:28:20.374515 | orchestrator | Sunday 29 March 2026 03:26:55 +0000 (0:00:02.394) 0:01:19.047 **********
2026-03-29 03:28:20.374522 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:28:20.374528 | orchestrator |
2026-03-29 03:28:20.374535 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-29 03:28:20.374542 | orchestrator | Sunday 29 March 2026 03:27:15 +0000 (0:00:19.798) 0:01:38.845 **********
2026-03-29 03:28:20.374549 | orchestrator |
2026-03-29 03:28:20.374556 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-29 03:28:20.374563 | orchestrator | Sunday 29 March 2026 03:27:15 +0000 (0:00:00.069) 0:01:38.915 **********
2026-03-29 03:28:20.374571 | orchestrator |
2026-03-29 03:28:20.374577 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-29 03:28:20.374583 | orchestrator | Sunday 29 March 2026 03:27:15 +0000 (0:00:00.069) 0:01:38.985 **********
2026-03-29 03:28:20.374589 | orchestrator |
2026-03-29 03:28:20.374595 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-03-29 03:28:20.374602 | orchestrator | Sunday 29 March 2026 03:27:15 +0000 (0:00:00.089) 0:01:39.074 **********
2026-03-29 03:28:20.374609 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:28:20.374615 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:28:20.374622 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:28:20.374629 | orchestrator |
2026-03-29 03:28:20.374636 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-03-29 03:28:20.374642 | orchestrator | Sunday 29 March 2026 03:27:37 +0000 (0:00:22.534) 0:02:01.609 **********
2026-03-29 03:28:20.374649 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:28:20.374656 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:28:20.374663 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:28:20.374670 | orchestrator |
2026-03-29 03:28:20.374677 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-03-29 03:28:20.374683 | orchestrator | Sunday 29 March 2026 03:27:43 +0000 (0:00:05.183) 0:02:06.792 **********
2026-03-29 03:28:20.374694 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:28:20.374698 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:28:20.374703 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:28:20.374707 | orchestrator |
2026-03-29
03:28:20.374711 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-03-29 03:28:20.374715 | orchestrator | Sunday 29 March 2026 03:28:09 +0000 (0:00:26.375) 0:02:33.168 **********
2026-03-29 03:28:20.374719 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:28:20.374724 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:28:20.374728 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:28:20.374732 | orchestrator |
2026-03-29 03:28:20.374736 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-03-29 03:28:20.374763 | orchestrator | Sunday 29 March 2026 03:28:20 +0000 (0:00:10.553) 0:02:43.721 **********
2026-03-29 03:28:20.374769 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:28:20.374773 | orchestrator |
2026-03-29 03:28:20.374778 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:28:20.374784 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-29 03:28:20.374791 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 03:28:20.374796 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 03:28:20.374801 | orchestrator |
2026-03-29 03:28:20.374806 | orchestrator |
2026-03-29 03:28:20.374815 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:28:20.374821 | orchestrator | Sunday 29 March 2026 03:28:20 +0000 (0:00:00.259) 0:02:43.981 **********
2026-03-29 03:28:20.374825 | orchestrator | ===============================================================================
2026-03-29 03:28:20.374830 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.38s
2026-03-29 03:28:20.374835 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.53s
2026-03-29 03:28:20.374840 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.80s
2026-03-29 03:28:20.374845 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.55s
2026-03-29 03:28:20.374849 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.51s
2026-03-29 03:28:20.374854 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.87s
2026-03-29 03:28:20.374859 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.84s
2026-03-29 03:28:20.374864 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.18s
2026-03-29 03:28:20.374869 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.42s
2026-03-29 03:28:20.374874 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.31s
2026-03-29 03:28:20.374879 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.14s
2026-03-29 03:28:20.374883 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.80s
2026-03-29 03:28:20.374888 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.76s
2026-03-29 03:28:20.374893 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.34s
2026-03-29 03:28:20.374903 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.31s
2026-03-29 03:28:20.761895 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.04s
2026-03-29 03:28:20.761986 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.74s
2026-03-29 03:28:20.761996 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.39s
2026-03-29 03:28:20.762080 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.14s
2026-03-29 03:28:20.762090 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.11s
2026-03-29 03:28:23.112832 | orchestrator | 2026-03-29 03:28:23 | INFO  | Task 02feef16-0c90-4420-b3be-ba550c751943 (barbican) was prepared for execution.
2026-03-29 03:28:23.112911 | orchestrator | 2026-03-29 03:28:23 | INFO  | It takes a moment until task 02feef16-0c90-4420-b3be-ba550c751943 (barbican) has been started and output is visible here.
2026-03-29 03:29:08.887982 | orchestrator |
2026-03-29 03:29:08.888063 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:29:08.888072 | orchestrator |
2026-03-29 03:29:08.888077 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:29:08.888082 | orchestrator | Sunday 29 March 2026 03:28:27 +0000 (0:00:00.254) 0:00:00.254 **********
2026-03-29 03:29:08.888087 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:29:08.888092 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:29:08.888096 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:29:08.888100 | orchestrator |
2026-03-29 03:29:08.888104 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:29:08.888108 | orchestrator | Sunday 29 March 2026 03:28:27 +0000 (0:00:00.333) 0:00:00.588 **********
2026-03-29 03:29:08.888112 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-29 03:29:08.888117 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-29 03:29:08.888121 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-29 03:29:08.888134 | orchestrator |
2026-03-29 03:29:08.888143 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-29 03:29:08.888148 | orchestrator |
2026-03-29 03:29:08.888152 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-29 03:29:08.888156 | orchestrator | Sunday 29 March 2026 03:28:27 +0000 (0:00:00.433) 0:00:01.022 **********
2026-03-29 03:29:08.888160 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:29:08.888165 | orchestrator |
2026-03-29 03:29:08.888169 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-29 03:29:08.888173 | orchestrator | Sunday 29 March 2026 03:28:28 +0000 (0:00:00.535) 0:00:01.558 **********
2026-03-29 03:29:08.888178 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-29 03:29:08.888182 | orchestrator |
2026-03-29 03:29:08.888186 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-29 03:29:08.888190 | orchestrator | Sunday 29 March 2026 03:28:32 +0000 (0:00:03.719) 0:00:05.277 **********
2026-03-29 03:29:08.888194 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-29 03:29:08.888198 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-29 03:29:08.888202 | orchestrator |
2026-03-29 03:29:08.888206 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-29 03:29:08.888210 | orchestrator | Sunday 29 March 2026 03:28:39 +0000 (0:00:06.819) 0:00:12.096 **********
2026-03-29 03:29:08.888214 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 03:29:08.888218 | orchestrator |
2026-03-29 03:29:08.888222 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-29 03:29:08.888239 | orchestrator | Sunday 29 March 2026 03:28:42 +0000 (0:00:03.486) 0:00:15.583 **********
2026-03-29 03:29:08.888243 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 03:29:08.888248 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-29 03:29:08.888252 | orchestrator |
2026-03-29 03:29:08.888256 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-29 03:29:08.888260 | orchestrator | Sunday 29 March 2026 03:28:46 +0000 (0:00:04.301) 0:00:19.884 **********
2026-03-29 03:29:08.888285 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 03:29:08.888296 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-29 03:29:08.888300 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-29 03:29:08.888304 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-29 03:29:08.888308 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-29 03:29:08.888312 | orchestrator |
2026-03-29 03:29:08.888316 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-29 03:29:08.888320 | orchestrator | Sunday 29 March 2026 03:29:03 +0000 (0:00:16.464) 0:00:36.349 **********
2026-03-29 03:29:08.888324 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-29 03:29:08.888328 | orchestrator |
2026-03-29 03:29:08.888338 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-29 03:29:08.888342 | orchestrator | Sunday 29 March 2026 03:29:07 +0000 (0:00:03.904) 0:00:40.254 **********
2026-03-29 03:29:08.888348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes':
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:08.888365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:08.888370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:08.888379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:08.888390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:08.888394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:08.888403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:14.911628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:14.911747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:14.911763 | orchestrator | 2026-03-29 03:29:14.911774 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-29 03:29:14.911785 | orchestrator | Sunday 29 March 2026 03:29:08 +0000 (0:00:01.647) 0:00:41.901 ********** 2026-03-29 03:29:14.911816 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-29 03:29:14.911826 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-29 03:29:14.911835 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-29 03:29:14.911843 | orchestrator | 2026-03-29 03:29:14.911853 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-29 03:29:14.911862 | orchestrator | Sunday 29 March 2026 03:29:10 +0000 (0:00:01.174) 0:00:43.076 ********** 2026-03-29 03:29:14.911883 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:29:14.911893 | orchestrator | 2026-03-29 03:29:14.911902 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-29 03:29:14.911911 | orchestrator | Sunday 29 March 2026 03:29:10 +0000 (0:00:00.342) 0:00:43.418 ********** 2026-03-29 03:29:14.911919 | orchestrator | 
skipping: [testbed-node-0] 2026-03-29 03:29:14.911928 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:29:14.911936 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:29:14.911945 | orchestrator | 2026-03-29 03:29:14.911954 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-29 03:29:14.911962 | orchestrator | Sunday 29 March 2026 03:29:10 +0000 (0:00:00.315) 0:00:43.734 ********** 2026-03-29 03:29:14.911972 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:29:14.911981 | orchestrator | 2026-03-29 03:29:14.911989 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-29 03:29:14.911998 | orchestrator | Sunday 29 March 2026 03:29:11 +0000 (0:00:00.590) 0:00:44.325 ********** 2026-03-29 03:29:14.912008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:14.912035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:14.912047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:14.912070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:14.912081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:14.912091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:14.912100 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:14.912117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:16.301385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:16.301513 | orchestrator | 2026-03-29 03:29:16.301533 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-29 03:29:16.301546 | orchestrator | Sunday 29 March 2026 03:29:14 +0000 (0:00:03.599) 0:00:47.924 ********** 2026-03-29 03:29:16.301575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:16.301589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:16.301601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:16.301613 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:29:16.301626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:16.301657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:16.301677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:16.301689 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:29:16.301706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:16.301769 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:16.301791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:16.301810 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:29:16.301841 | orchestrator | 2026-03-29 03:29:16.301880 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-29 03:29:16.301913 | orchestrator | Sunday 29 March 2026 03:29:15 +0000 (0:00:00.567) 0:00:48.492 ********** 2026-03-29 03:29:16.301946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:19.856799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:19.856911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:19.856924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:19.856931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:19.856939 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:29:19.856947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:19.856977 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:29:19.857002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:19.857012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:19.857019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:19.857025 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:29:19.857031 | orchestrator | 2026-03-29 03:29:19.857039 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-29 03:29:19.857048 | orchestrator | Sunday 29 March 2026 03:29:16 +0000 (0:00:00.831) 0:00:49.324 ********** 2026-03-29 03:29:19.857054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:19.857070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:19.857079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:29.356063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:29.356159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:29.356169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:29.356178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:29.356205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:29.356212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:29.356220 | orchestrator | 2026-03-29 03:29:29.356230 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-29 03:29:29.356243 | orchestrator | Sunday 29 March 2026 03:29:19 +0000 (0:00:03.550) 0:00:52.875 ********** 2026-03-29 03:29:29.356250 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:29:29.356258 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:29:29.356265 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:29:29.356271 | orchestrator | 2026-03-29 03:29:29.356290 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-29 03:29:29.356297 | orchestrator | Sunday 29 March 2026 03:29:21 +0000 (0:00:01.562) 0:00:54.437 ********** 2026-03-29 03:29:29.356304 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:29:29.356311 | orchestrator | 2026-03-29 03:29:29.356317 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-29 03:29:29.356324 | orchestrator | Sunday 29 March 2026 03:29:22 +0000 (0:00:00.930) 0:00:55.368 ********** 2026-03-29 03:29:29.356331 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:29:29.356337 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:29:29.356344 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:29:29.356350 | orchestrator | 2026-03-29 03:29:29.356357 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-29 03:29:29.356364 | orchestrator | Sunday 29 March 2026 03:29:22 +0000 (0:00:00.585) 0:00:55.953 ********** 2026-03-29 03:29:29.356403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:29.356419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:29.356426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:29.356438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:30.223852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:30.223974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:30.224075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:30.224093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:30.224105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:30.224117 | orchestrator | 2026-03-29 03:29:30.224130 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-29 03:29:30.224143 | orchestrator | Sunday 29 March 2026 03:29:29 +0000 (0:00:06.423) 0:01:02.376 ********** 2026-03-29 03:29:30.224178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:30.224208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:30.224229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:30.224268 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:29:30.224291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:30.224311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:30.224330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:30.224348 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:29:30.224391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 03:29:32.650685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 03:29:32.650932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:29:32.650964 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:29:32.650978 | orchestrator | 2026-03-29 03:29:32.650991 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-29 03:29:32.651003 | orchestrator | Sunday 29 March 2026 03:29:30 +0000 (0:00:00.862) 0:01:03.239 ********** 2026-03-29 03:29:32.651016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:32.651028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:32.651079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 03:29:32.651105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:32.651127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:32.651146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:32.651164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:32.651185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:32.651213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:29:32.651235 | orchestrator | 2026-03-29 03:29:32.651248 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-29 03:29:32.651269 | orchestrator | Sunday 29 March 2026 03:29:32 +0000 (0:00:02.424) 0:01:05.663 ********** 2026-03-29 03:30:12.178328 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:30:12.178452 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
03:30:12.178472 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:30:12.178486 | orchestrator | 2026-03-29 03:30:12.178501 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-29 03:30:12.178516 | orchestrator | Sunday 29 March 2026 03:29:32 +0000 (0:00:00.301) 0:01:05.964 ********** 2026-03-29 03:30:12.178530 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:30:12.178538 | orchestrator | 2026-03-29 03:30:12.178547 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-29 03:30:12.178555 | orchestrator | Sunday 29 March 2026 03:29:35 +0000 (0:00:02.406) 0:01:08.371 ********** 2026-03-29 03:30:12.178563 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:30:12.178571 | orchestrator | 2026-03-29 03:30:12.178579 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-29 03:30:12.178588 | orchestrator | Sunday 29 March 2026 03:29:37 +0000 (0:00:02.330) 0:01:10.702 ********** 2026-03-29 03:30:12.178595 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:30:12.178603 | orchestrator | 2026-03-29 03:30:12.178611 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-29 03:30:12.178619 | orchestrator | Sunday 29 March 2026 03:29:49 +0000 (0:00:12.313) 0:01:23.015 ********** 2026-03-29 03:30:12.178627 | orchestrator | 2026-03-29 03:30:12.178635 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-29 03:30:12.178643 | orchestrator | Sunday 29 March 2026 03:29:50 +0000 (0:00:00.069) 0:01:23.085 ********** 2026-03-29 03:30:12.178651 | orchestrator | 2026-03-29 03:30:12.178659 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-29 03:30:12.178667 | orchestrator | Sunday 29 March 2026 03:29:50 +0000 (0:00:00.068) 0:01:23.154 ********** 2026-03-29 
03:30:12.178675 | orchestrator | 2026-03-29 03:30:12.178783 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-29 03:30:12.178792 | orchestrator | Sunday 29 March 2026 03:29:50 +0000 (0:00:00.072) 0:01:23.226 ********** 2026-03-29 03:30:12.178800 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:30:12.178809 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:30:12.178817 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:30:12.178825 | orchestrator | 2026-03-29 03:30:12.178833 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-29 03:30:12.178841 | orchestrator | Sunday 29 March 2026 03:29:56 +0000 (0:00:06.286) 0:01:29.513 ********** 2026-03-29 03:30:12.178849 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:30:12.178857 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:30:12.178865 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:30:12.178875 | orchestrator | 2026-03-29 03:30:12.178884 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-29 03:30:12.178893 | orchestrator | Sunday 29 March 2026 03:30:06 +0000 (0:00:09.994) 0:01:39.507 ********** 2026-03-29 03:30:12.178902 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:30:12.178911 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:30:12.178920 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:30:12.178929 | orchestrator | 2026-03-29 03:30:12.178938 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:30:12.178949 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 03:30:12.178960 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 03:30:12.178995 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 03:30:12.179009 | orchestrator | 2026-03-29 03:30:12.179023 | orchestrator | 2026-03-29 03:30:12.179036 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:30:12.179050 | orchestrator | Sunday 29 March 2026 03:30:11 +0000 (0:00:05.330) 0:01:44.837 ********** 2026-03-29 03:30:12.179063 | orchestrator | =============================================================================== 2026-03-29 03:30:12.179077 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.46s 2026-03-29 03:30:12.179090 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.31s 2026-03-29 03:30:12.179104 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.99s 2026-03-29 03:30:12.179117 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.82s 2026-03-29 03:30:12.179131 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.42s 2026-03-29 03:30:12.179144 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.29s 2026-03-29 03:30:12.179157 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.33s 2026-03-29 03:30:12.179171 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.30s 2026-03-29 03:30:12.179184 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.90s 2026-03-29 03:30:12.179198 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.72s 2026-03-29 03:30:12.179213 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.60s 2026-03-29 03:30:12.179245 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.55s 
2026-03-29 03:30:12.179261 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.49s 2026-03-29 03:30:12.179274 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.42s 2026-03-29 03:30:12.179288 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.41s 2026-03-29 03:30:12.179315 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.33s 2026-03-29 03:30:12.179324 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.65s 2026-03-29 03:30:12.179331 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.56s 2026-03-29 03:30:12.179339 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.17s 2026-03-29 03:30:12.179347 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.93s 2026-03-29 03:30:14.480120 | orchestrator | 2026-03-29 03:30:14 | INFO  | Task dd9de13a-3816-42b6-b1d0-7ee1425cd954 (designate) was prepared for execution. 2026-03-29 03:30:14.480218 | orchestrator | 2026-03-29 03:30:14 | INFO  | It takes a moment until task dd9de13a-3816-42b6-b1d0-7ee1425cd954 (designate) has been started and output is visible here. 
2026-03-29 03:30:47.243269 | orchestrator | 2026-03-29 03:30:47.243363 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:30:47.243375 | orchestrator | 2026-03-29 03:30:47.243382 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:30:47.243389 | orchestrator | Sunday 29 March 2026 03:30:18 +0000 (0:00:00.273) 0:00:00.274 ********** 2026-03-29 03:30:47.243396 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:30:47.243404 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:30:47.243411 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:30:47.243417 | orchestrator | 2026-03-29 03:30:47.243424 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:30:47.243430 | orchestrator | Sunday 29 March 2026 03:30:19 +0000 (0:00:00.309) 0:00:00.583 ********** 2026-03-29 03:30:47.243437 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-29 03:30:47.243465 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-29 03:30:47.243472 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-29 03:30:47.243478 | orchestrator | 2026-03-29 03:30:47.243485 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-29 03:30:47.243491 | orchestrator | 2026-03-29 03:30:47.243497 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 03:30:47.243503 | orchestrator | Sunday 29 March 2026 03:30:19 +0000 (0:00:00.460) 0:00:01.044 ********** 2026-03-29 03:30:47.243511 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:30:47.243518 | orchestrator | 2026-03-29 03:30:47.243524 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-03-29 03:30:47.243530 | orchestrator | Sunday 29 March 2026 03:30:20 +0000 (0:00:00.563) 0:00:01.607 ********** 2026-03-29 03:30:47.243537 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-29 03:30:47.243543 | orchestrator | 2026-03-29 03:30:47.243549 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-29 03:30:47.243556 | orchestrator | Sunday 29 March 2026 03:30:23 +0000 (0:00:03.594) 0:00:05.202 ********** 2026-03-29 03:30:47.243562 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-29 03:30:47.243568 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-29 03:30:47.243575 | orchestrator | 2026-03-29 03:30:47.243581 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-29 03:30:47.243587 | orchestrator | Sunday 29 March 2026 03:30:30 +0000 (0:00:06.694) 0:00:11.897 ********** 2026-03-29 03:30:47.243593 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 03:30:47.243600 | orchestrator | 2026-03-29 03:30:47.243606 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-29 03:30:47.243612 | orchestrator | Sunday 29 March 2026 03:30:33 +0000 (0:00:03.332) 0:00:15.229 ********** 2026-03-29 03:30:47.243619 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 03:30:47.243625 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-29 03:30:47.243631 | orchestrator | 2026-03-29 03:30:47.243638 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-29 03:30:47.243645 | orchestrator | Sunday 29 March 2026 03:30:38 +0000 (0:00:04.316) 0:00:19.545 ********** 2026-03-29 03:30:47.243703 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-03-29 03:30:47.243711 | orchestrator | 2026-03-29 03:30:47.243717 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-29 03:30:47.243723 | orchestrator | Sunday 29 March 2026 03:30:41 +0000 (0:00:03.360) 0:00:22.905 ********** 2026-03-29 03:30:47.243729 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-29 03:30:47.243735 | orchestrator | 2026-03-29 03:30:47.243742 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-29 03:30:47.243748 | orchestrator | Sunday 29 March 2026 03:30:45 +0000 (0:00:03.749) 0:00:26.655 ********** 2026-03-29 03:30:47.243770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:30:47.243803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:30:47.243810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:30:47.243817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:30:47.243825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:30:47.243836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:30:47.243843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:47.243860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 
03:30:53.546565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:53.546581 | orchestrator | 2026-03-29 03:30:53.546591 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-29 03:30:53.546600 | orchestrator | Sunday 29 March 2026 03:30:48 +0000 (0:00:02.855) 0:00:29.510 ********** 2026-03-29 03:30:53.546621 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:30:53.546632 | orchestrator | 2026-03-29 03:30:53.546640 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-29 03:30:53.546720 | orchestrator | Sunday 29 March 2026 03:30:48 +0000 (0:00:00.146) 0:00:29.656 ********** 2026-03-29 03:30:53.546729 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
03:30:53.546744 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:30:53.546752 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:30:53.546760 | orchestrator | 2026-03-29 03:30:53.546768 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 03:30:53.546778 | orchestrator | Sunday 29 March 2026 03:30:48 +0000 (0:00:00.530) 0:00:30.187 ********** 2026-03-29 03:30:53.546799 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:30:53.546813 | orchestrator | 2026-03-29 03:30:53.546826 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-29 03:30:53.546838 | orchestrator | Sunday 29 March 2026 03:30:49 +0000 (0:00:00.580) 0:00:30.768 ********** 2026-03-29 03:30:53.546851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:30:53.546875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:30:55.433199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:30:55.433291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:55.433459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:56.269741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:56.269840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:56.269878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:30:56.269889 | orchestrator | 2026-03-29 03:30:56.269915 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-29 03:30:56.269926 | orchestrator | Sunday 29 March 2026 03:30:55 +0000 (0:00:06.139) 0:00:36.907 ********** 2026-03-29 03:30:56.269938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:30:56.269950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:30:56.269979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:30:56.269990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:30:56.270007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:30:56.270069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-29 03:30:56.270087 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:30:56.270100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:30:56.270109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:30:56.270119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:30:56.270136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.014082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.014169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.014180 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:30:57.014206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:30:57.014213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:30:57.014219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.014223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.014258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 
03:30:57.014263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.014267 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:30:57.014271 | orchestrator | 2026-03-29 03:30:57.014276 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-29 03:30:57.014285 | orchestrator | Sunday 29 March 2026 03:30:56 +0000 (0:00:00.950) 0:00:37.858 ********** 2026-03-29 03:30:57.014289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:30:57.014293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:30:57.014297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.014314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348724 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:30:57.348748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:30:57.348785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:30:57.348795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348861 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:30:57.348872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:30:57.348879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:30:57.348885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:30:57.348908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:01.815387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:31:01.815502 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:31:01.815518 | orchestrator | 2026-03-29 03:31:01.815529 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-29 
03:31:01.815555 | orchestrator | Sunday 29 March 2026 03:30:57 +0000 (0:00:00.965) 0:00:38.823 ********** 2026-03-29 03:31:01.815574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:01.815598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:01.815698 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:01.815737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:01.815755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:01.815781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:01.815798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:01.815813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:01.815831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:01.815842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:01.815861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:13.493786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:13.493954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:13.493971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:13.493998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:13.494005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:13.494050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:13.494080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:13.494092 | orchestrator | 2026-03-29 03:31:13.494104 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-29 03:31:13.494116 | orchestrator | Sunday 29 March 2026 03:31:03 +0000 (0:00:06.289) 0:00:45.113 ********** 2026-03-29 03:31:13.494135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:13.494148 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:13.494168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:13.494180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:13.494201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.743991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.744005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.744011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.744018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.744025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:21.744032 | orchestrator | 2026-03-29 03:31:21.744041 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-29 03:31:21.744049 | orchestrator | Sunday 29 March 2026 03:31:18 +0000 (0:00:14.425) 0:00:59.538 ********** 2026-03-29 03:31:21.744059 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 03:31:26.031727 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 03:31:26.031817 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 03:31:26.031828 | orchestrator | 2026-03-29 03:31:26.031852 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-29 03:31:26.031860 | orchestrator | Sunday 29 March 2026 03:31:21 +0000 (0:00:03.681) 0:01:03.220 ********** 2026-03-29 03:31:26.031867 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 03:31:26.031894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 03:31:26.031902 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 03:31:26.031909 | orchestrator | 2026-03-29 03:31:26.031916 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-29 03:31:26.031923 | orchestrator | Sunday 29 March 2026 03:31:24 +0000 (0:00:02.438) 0:01:05.659 ********** 2026-03-29 03:31:26.031933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:31:26.031943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:31:26.031951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-03-29 03:31:26.031972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:26.031986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:31:26.032000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-03-29 03:31:26.032013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:26.032025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:26.032038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-29 03:31:26.032055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:31:26.032078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:28.963923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-29 03:31:28.964035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:31:28.964052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:31:28.964065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:28.964078 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:28.964090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:28.964149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:28.964164 | orchestrator | 2026-03-29 03:31:28.964176 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-03-29 03:31:28.964189 | orchestrator | Sunday 29 March 2026 03:31:27 +0000 (0:00:03.000) 0:01:08.659 ********** 2026-03-29 03:31:28.964201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:31:28.964215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 
03:31:28.964226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:31:28.964238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:28.964270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.024929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.025014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.025026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:30.025044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:30.025052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.025087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.025136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.025152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.025162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.025173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.025184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:30.025194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:30.025212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:30.025219 | orchestrator | 2026-03-29 03:31:30.025226 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 03:31:30.025240 | orchestrator | Sunday 29 March 2026 03:31:30 +0000 (0:00:02.828) 0:01:11.487 ********** 2026-03-29 03:31:30.985054 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:31:30.985146 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:31:30.985157 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:31:30.985165 | orchestrator | 2026-03-29 03:31:30.985173 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-29 03:31:30.985181 | orchestrator | Sunday 29 March 2026 03:31:30 +0000 (0:00:00.318) 0:01:11.806 ********** 2026-03-29 03:31:30.985192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:31:30.985203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:31:30.985211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.985241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.985250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.985286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.985294 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:31:30.985301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:31:30.985307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:31:30.985314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.985327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.985334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:30.985350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:31:34.428473 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:31:34.428568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 03:31:34.428582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 03:31:34.428591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 03:31:34.428684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 03:31:34.428695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 03:31:34.428720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:31:34.428727 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:31:34.428734 | orchestrator | 2026-03-29 03:31:34.428755 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-29 03:31:34.428763 | orchestrator | Sunday 29 March 2026 03:31:31 +0000 (0:00:00.759) 0:01:12.565 ********** 2026-03-29 03:31:34.428770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:34.428778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:34.428792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 03:31:34.428799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:34.428815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:36.346314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 03:31:36.346424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.346439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.346470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.346478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.346486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.347536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.347592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.347603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.347668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.347680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.347690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.347703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:31:36.347713 | orchestrator | 2026-03-29 03:31:36.347723 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 03:31:36.347733 | orchestrator | Sunday 29 March 2026 03:31:36 +0000 (0:00:04.941) 0:01:17.507 ********** 2026-03-29 03:31:36.347742 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:31:36.347757 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:32:49.626908 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:32:49.627062 | orchestrator | 2026-03-29 03:32:49.627090 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-03-29 03:32:49.627111 | orchestrator | Sunday 29 March 2026 03:31:36 +0000 (0:00:00.315) 0:01:17.822 ********** 2026-03-29 03:32:49.627131 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-29 03:32:49.627149 | orchestrator | 2026-03-29 03:32:49.627167 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-29 03:32:49.627204 | orchestrator | Sunday 29 March 2026 03:31:38 +0000 (0:00:02.199) 0:01:20.022 ********** 2026-03-29 03:32:49.627236 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 03:32:49.627254 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-29 03:32:49.627274 | orchestrator | 2026-03-29 03:32:49.627292 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-29 03:32:49.627309 | orchestrator | Sunday 29 March 2026 03:31:40 +0000 (0:00:02.314) 0:01:22.337 ********** 2026-03-29 03:32:49.627364 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:32:49.627385 | orchestrator | 2026-03-29 03:32:49.627404 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-29 03:32:49.627423 | orchestrator | Sunday 29 March 2026 03:31:56 +0000 (0:00:15.956) 0:01:38.293 ********** 2026-03-29 03:32:49.627440 | orchestrator | 2026-03-29 03:32:49.627453 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-29 03:32:49.627466 | orchestrator | Sunday 29 March 2026 03:31:56 +0000 (0:00:00.070) 0:01:38.364 ********** 2026-03-29 03:32:49.627479 | orchestrator | 2026-03-29 03:32:49.627493 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-29 03:32:49.627505 | orchestrator | Sunday 29 March 2026 03:31:56 +0000 (0:00:00.070) 0:01:38.434 ********** 2026-03-29 03:32:49.627517 | orchestrator | 2026-03-29 
03:32:49.627530 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-29 03:32:49.627542 | orchestrator | Sunday 29 March 2026 03:31:57 +0000 (0:00:00.070) 0:01:38.505 ********** 2026-03-29 03:32:49.627555 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:32:49.627600 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:32:49.627619 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:32:49.627632 | orchestrator | 2026-03-29 03:32:49.627645 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-29 03:32:49.627662 | orchestrator | Sunday 29 March 2026 03:32:09 +0000 (0:00:12.695) 0:01:51.201 ********** 2026-03-29 03:32:49.627689 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:32:49.627710 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:32:49.627729 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:32:49.627746 | orchestrator | 2026-03-29 03:32:49.627764 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-29 03:32:49.627779 | orchestrator | Sunday 29 March 2026 03:32:18 +0000 (0:00:08.614) 0:01:59.815 ********** 2026-03-29 03:32:49.627794 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:32:49.627812 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:32:49.627831 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:32:49.627849 | orchestrator | 2026-03-29 03:32:49.627869 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-29 03:32:49.627887 | orchestrator | Sunday 29 March 2026 03:32:23 +0000 (0:00:05.616) 0:02:05.431 ********** 2026-03-29 03:32:49.627905 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:32:49.627916 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:32:49.627927 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:32:49.627938 | orchestrator | 2026-03-29 03:32:49.627949 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-29 03:32:49.627960 | orchestrator | Sunday 29 March 2026 03:32:29 +0000 (0:00:05.684) 0:02:11.116 ********** 2026-03-29 03:32:49.627970 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:32:49.627981 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:32:49.627992 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:32:49.628002 | orchestrator | 2026-03-29 03:32:49.628013 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-29 03:32:49.628029 | orchestrator | Sunday 29 March 2026 03:32:35 +0000 (0:00:05.796) 0:02:16.913 ********** 2026-03-29 03:32:49.628047 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:32:49.628064 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:32:49.628082 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:32:49.628099 | orchestrator | 2026-03-29 03:32:49.628116 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-29 03:32:49.628133 | orchestrator | Sunday 29 March 2026 03:32:41 +0000 (0:00:05.899) 0:02:22.813 ********** 2026-03-29 03:32:49.628152 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:32:49.628171 | orchestrator | 2026-03-29 03:32:49.628190 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:32:49.628209 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 03:32:49.628249 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 03:32:49.628268 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 03:32:49.628286 | orchestrator | 2026-03-29 03:32:49.628305 | orchestrator | 2026-03-29 03:32:49.628317 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-29 03:32:49.628345 | orchestrator | Sunday 29 March 2026 03:32:49 +0000 (0:00:07.908) 0:02:30.721 ********** 2026-03-29 03:32:49.628357 | orchestrator | =============================================================================== 2026-03-29 03:32:49.628368 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.96s 2026-03-29 03:32:49.628378 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.43s 2026-03-29 03:32:49.628411 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.70s 2026-03-29 03:32:49.628423 | orchestrator | designate : Restart designate-api container ----------------------------- 8.61s 2026-03-29 03:32:49.628434 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.91s 2026-03-29 03:32:49.628444 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.70s 2026-03-29 03:32:49.628455 | orchestrator | designate : Copying over config.json files for services ----------------- 6.29s 2026-03-29 03:32:49.628466 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.14s 2026-03-29 03:32:49.628476 | orchestrator | designate : Restart designate-worker container -------------------------- 5.90s 2026-03-29 03:32:49.628487 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.80s 2026-03-29 03:32:49.628497 | orchestrator | designate : Restart designate-producer container ------------------------ 5.68s 2026-03-29 03:32:49.628508 | orchestrator | designate : Restart designate-central container ------------------------- 5.62s 2026-03-29 03:32:49.628519 | orchestrator | designate : Check designate containers ---------------------------------- 4.94s 2026-03-29 03:32:49.628529 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.32s 2026-03-29 03:32:49.628540 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.75s 2026-03-29 03:32:49.628551 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.68s 2026-03-29 03:32:49.628587 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.59s 2026-03-29 03:32:49.628599 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.36s 2026-03-29 03:32:49.628610 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.33s 2026-03-29 03:32:49.628621 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.00s 2026-03-29 03:32:51.969427 | orchestrator | 2026-03-29 03:32:51 | INFO  | Task defdf83e-f6c2-4760-bf0f-b4c130d126ad (octavia) was prepared for execution. 2026-03-29 03:32:51.969518 | orchestrator | 2026-03-29 03:32:51 | INFO  | It takes a moment until task defdf83e-f6c2-4760-bf0f-b4c130d126ad (octavia) has been started and output is visible here. 
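The `PLAY RECAP` lines above follow Ansible's default stdout-callback format (`host : ok=N changed=N unreachable=N failed=N skipped=N rescued=N ignored=N`). A minimal sketch for pulling per-host counters out of such console output, assuming exactly this line shape (not an official Ansible API):

```python
import re

# One recap line per host, e.g.:
# "testbed-node-0 : ok=29 changed=23 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap(line):
    """Return (host, {counter: int, ...}) for a recap line, or None."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    stats = {k: int(v) for k, v in
             (pair.split("=") for pair in m.group("stats").split())}
    return m.group("host"), stats

host, stats = parse_recap(
    "testbed-node-0 : ok=29 changed=23 unreachable=0 "
    "failed=0 skipped=7 rescued=0 ignored=0")
print(host, stats["failed"])  # testbed-node-0 0
```

Such a parser is handy for gating a periodic job on `failed`/`unreachable` counts instead of eyeballing long transcripts like this one.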
2026-03-29 03:35:04.258802 | orchestrator | 2026-03-29 03:35:04.258901 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:35:04.258911 | orchestrator | 2026-03-29 03:35:04.258917 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:35:04.258925 | orchestrator | Sunday 29 March 2026 03:32:56 +0000 (0:00:00.295) 0:00:00.295 ********** 2026-03-29 03:35:04.258932 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:04.258939 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:35:04.258946 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:35:04.258952 | orchestrator | 2026-03-29 03:35:04.258959 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:35:04.258987 | orchestrator | Sunday 29 March 2026 03:32:56 +0000 (0:00:00.324) 0:00:00.620 ********** 2026-03-29 03:35:04.258993 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-29 03:35:04.259000 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-29 03:35:04.259006 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-29 03:35:04.259013 | orchestrator | 2026-03-29 03:35:04.259019 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-29 03:35:04.259026 | orchestrator | 2026-03-29 03:35:04.259032 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 03:35:04.259039 | orchestrator | Sunday 29 March 2026 03:32:56 +0000 (0:00:00.433) 0:00:01.053 ********** 2026-03-29 03:35:04.259046 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:35:04.259053 | orchestrator | 2026-03-29 03:35:04.259059 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-03-29 03:35:04.259066 | orchestrator | Sunday 29 March 2026 03:32:57 +0000 (0:00:00.551) 0:00:01.604 ********** 2026-03-29 03:35:04.259073 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-29 03:35:04.259079 | orchestrator | 2026-03-29 03:35:04.259085 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-29 03:35:04.259091 | orchestrator | Sunday 29 March 2026 03:33:01 +0000 (0:00:03.785) 0:00:05.390 ********** 2026-03-29 03:35:04.259097 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-29 03:35:04.259104 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-29 03:35:04.259110 | orchestrator | 2026-03-29 03:35:04.259117 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-29 03:35:04.259123 | orchestrator | Sunday 29 March 2026 03:33:08 +0000 (0:00:06.718) 0:00:12.108 ********** 2026-03-29 03:35:04.259130 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 03:35:04.259136 | orchestrator | 2026-03-29 03:35:04.259142 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-29 03:35:04.259149 | orchestrator | Sunday 29 March 2026 03:33:11 +0000 (0:00:03.372) 0:00:15.481 ********** 2026-03-29 03:35:04.259167 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 03:35:04.259174 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-29 03:35:04.259181 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-29 03:35:04.259188 | orchestrator | 2026-03-29 03:35:04.259195 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-29 03:35:04.259201 | orchestrator | Sunday 29 March 2026 03:33:19 +0000 
(0:00:08.567) 0:00:24.049 ********** 2026-03-29 03:35:04.259207 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 03:35:04.259213 | orchestrator | 2026-03-29 03:35:04.259220 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-29 03:35:04.259226 | orchestrator | Sunday 29 March 2026 03:33:23 +0000 (0:00:03.656) 0:00:27.706 ********** 2026-03-29 03:35:04.259232 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-29 03:35:04.259238 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-29 03:35:04.259244 | orchestrator | 2026-03-29 03:35:04.259251 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-29 03:35:04.259256 | orchestrator | Sunday 29 March 2026 03:33:31 +0000 (0:00:07.770) 0:00:35.476 ********** 2026-03-29 03:35:04.259262 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-29 03:35:04.259268 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-29 03:35:04.259275 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-29 03:35:04.259281 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-29 03:35:04.259294 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-29 03:35:04.259300 | orchestrator | 2026-03-29 03:35:04.259306 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 03:35:04.259313 | orchestrator | Sunday 29 March 2026 03:33:48 +0000 (0:00:16.661) 0:00:52.137 ********** 2026-03-29 03:35:04.259319 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:35:04.259325 | orchestrator | 2026-03-29 03:35:04.259331 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-03-29 03:35:04.259339 | orchestrator | Sunday 29 March 2026 03:33:48 +0000 (0:00:00.744) 0:00:52.882 ********** 2026-03-29 03:35:04.259345 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259351 | orchestrator | 2026-03-29 03:35:04.259357 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-29 03:35:04.259364 | orchestrator | Sunday 29 March 2026 03:33:53 +0000 (0:00:04.771) 0:00:57.654 ********** 2026-03-29 03:35:04.259370 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259377 | orchestrator | 2026-03-29 03:35:04.259384 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-29 03:35:04.259405 | orchestrator | Sunday 29 March 2026 03:33:58 +0000 (0:00:04.587) 0:01:02.241 ********** 2026-03-29 03:35:04.259411 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:04.259418 | orchestrator | 2026-03-29 03:35:04.259424 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-29 03:35:04.259431 | orchestrator | Sunday 29 March 2026 03:34:01 +0000 (0:00:03.357) 0:01:05.599 ********** 2026-03-29 03:35:04.259437 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-29 03:35:04.259444 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-29 03:35:04.259450 | orchestrator | 2026-03-29 03:35:04.259457 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-29 03:35:04.259463 | orchestrator | Sunday 29 March 2026 03:34:11 +0000 (0:00:10.368) 0:01:15.968 ********** 2026-03-29 03:35:04.259527 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-29 03:35:04.259533 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-29 03:35:04.259541 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-29 03:35:04.259547 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-29 03:35:04.259552 | orchestrator | 2026-03-29 03:35:04.259557 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-29 03:35:04.259561 | orchestrator | Sunday 29 March 2026 03:34:29 +0000 (0:00:17.652) 0:01:33.620 ********** 2026-03-29 03:35:04.259565 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259570 | orchestrator | 2026-03-29 03:35:04.259574 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-29 03:35:04.259578 | orchestrator | Sunday 29 March 2026 03:34:34 +0000 (0:00:04.937) 0:01:38.558 ********** 2026-03-29 03:35:04.259583 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259587 | orchestrator | 2026-03-29 03:35:04.259591 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-29 03:35:04.259596 | orchestrator | Sunday 29 March 2026 03:34:40 +0000 (0:00:05.664) 0:01:44.222 ********** 2026-03-29 03:35:04.259600 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:35:04.259604 | orchestrator | 2026-03-29 03:35:04.259609 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-29 03:35:04.259613 | orchestrator | Sunday 29 March 2026 03:34:40 +0000 (0:00:00.223) 0:01:44.446 ********** 2026-03-29 03:35:04.259617 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:04.259627 | orchestrator | 2026-03-29 03:35:04.259631 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-29 03:35:04.259640 | orchestrator | Sunday 29 March 2026 03:34:45 +0000 (0:00:04.744) 0:01:49.191 ********** 2026-03-29 03:35:04.259645 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:35:04.259650 | orchestrator | 2026-03-29 03:35:04.259654 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-29 03:35:04.259659 | orchestrator | Sunday 29 March 2026 03:34:46 +0000 (0:00:01.233) 0:01:50.424 ********** 2026-03-29 03:35:04.259663 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:35:04.259667 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:35:04.259672 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259676 | orchestrator | 2026-03-29 03:35:04.259681 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-29 03:35:04.259685 | orchestrator | Sunday 29 March 2026 03:34:51 +0000 (0:00:05.385) 0:01:55.810 ********** 2026-03-29 03:35:04.259690 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:35:04.259694 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259698 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:35:04.259703 | orchestrator | 2026-03-29 03:35:04.259707 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-29 03:35:04.259712 | orchestrator | Sunday 29 March 2026 03:34:56 +0000 (0:00:04.934) 0:02:00.744 ********** 2026-03-29 03:35:04.259716 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259720 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:35:04.259725 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:35:04.259729 | orchestrator | 2026-03-29 03:35:04.259734 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-29 
03:35:04.259738 | orchestrator | Sunday 29 March 2026 03:34:57 +0000 (0:00:01.031) 0:02:01.776 ********** 2026-03-29 03:35:04.259741 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:35:04.259745 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:35:04.259749 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:04.259753 | orchestrator | 2026-03-29 03:35:04.259756 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-29 03:35:04.259760 | orchestrator | Sunday 29 March 2026 03:34:59 +0000 (0:00:01.850) 0:02:03.626 ********** 2026-03-29 03:35:04.259764 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:35:04.259768 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:35:04.259772 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259775 | orchestrator | 2026-03-29 03:35:04.259779 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-29 03:35:04.259783 | orchestrator | Sunday 29 March 2026 03:35:00 +0000 (0:00:01.295) 0:02:04.922 ********** 2026-03-29 03:35:04.259787 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259790 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:35:04.259794 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:35:04.259798 | orchestrator | 2026-03-29 03:35:04.259802 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-29 03:35:04.259805 | orchestrator | Sunday 29 March 2026 03:35:02 +0000 (0:00:01.188) 0:02:06.111 ********** 2026-03-29 03:35:04.259809 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:35:04.259813 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:35:04.259817 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:04.259820 | orchestrator | 2026-03-29 03:35:04.259829 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-29 03:35:31.093424 | orchestrator 
| Sunday 29 March 2026 03:35:04 +0000 (0:00:02.229) 0:02:08.340 ********** 2026-03-29 03:35:31.093628 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:35:31.093649 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:35:31.093660 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:35:31.093671 | orchestrator | 2026-03-29 03:35:31.093682 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-29 03:35:31.093714 | orchestrator | Sunday 29 March 2026 03:35:05 +0000 (0:00:01.511) 0:02:09.852 ********** 2026-03-29 03:35:31.093721 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:35:31.093728 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:31.093734 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:35:31.093739 | orchestrator | 2026-03-29 03:35:31.093746 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-29 03:35:31.093752 | orchestrator | Sunday 29 March 2026 03:35:06 +0000 (0:00:00.629) 0:02:10.481 ********** 2026-03-29 03:35:31.093758 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:35:31.093763 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:31.093769 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:35:31.093775 | orchestrator | 2026-03-29 03:35:31.093781 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 03:35:31.093787 | orchestrator | Sunday 29 March 2026 03:35:09 +0000 (0:00:03.054) 0:02:13.536 ********** 2026-03-29 03:35:31.093793 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:35:31.093799 | orchestrator | 2026-03-29 03:35:31.093805 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-29 03:35:31.093811 | orchestrator | Sunday 29 March 2026 03:35:10 +0000 (0:00:00.573) 0:02:14.110 ********** 2026-03-29 
03:35:31.093816 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:31.093824 | orchestrator | 2026-03-29 03:35:31.093832 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-29 03:35:31.093838 | orchestrator | Sunday 29 March 2026 03:35:14 +0000 (0:00:04.162) 0:02:18.272 ********** 2026-03-29 03:35:31.093844 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:31.093850 | orchestrator | 2026-03-29 03:35:31.093855 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-29 03:35:31.093862 | orchestrator | Sunday 29 March 2026 03:35:17 +0000 (0:00:03.273) 0:02:21.546 ********** 2026-03-29 03:35:31.093868 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-29 03:35:31.093874 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-29 03:35:31.093880 | orchestrator | 2026-03-29 03:35:31.093886 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-29 03:35:31.093891 | orchestrator | Sunday 29 March 2026 03:35:25 +0000 (0:00:07.671) 0:02:29.217 ********** 2026-03-29 03:35:31.093897 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:31.093903 | orchestrator | 2026-03-29 03:35:31.093909 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-29 03:35:31.093926 | orchestrator | Sunday 29 March 2026 03:35:28 +0000 (0:00:03.493) 0:02:32.710 ********** 2026-03-29 03:35:31.093932 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:35:31.093938 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:35:31.093944 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:35:31.093949 | orchestrator | 2026-03-29 03:35:31.093955 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-29 03:35:31.093963 | orchestrator | Sunday 29 March 2026 03:35:29 +0000 (0:00:00.489) 0:02:33.200 ********** 
2026-03-29 03:35:31.093977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:31.094062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:31.094075 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:31.094084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:31.094096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:31.094110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:31.094117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:31.094132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:31.094147 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:32.532926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:32.533005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:32.533024 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:32.533030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:35:32.533049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:35:32.533054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:35:32.533059 | orchestrator | 2026-03-29 03:35:32.533065 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-29 03:35:32.533071 | orchestrator | Sunday 29 March 2026 03:35:31 +0000 (0:00:02.413) 0:02:35.613 ********** 2026-03-29 03:35:32.533076 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:35:32.533081 | orchestrator | 2026-03-29 03:35:32.533086 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-29 03:35:32.533090 | orchestrator | Sunday 29 March 2026 03:35:31 +0000 (0:00:00.138) 0:02:35.752 ********** 2026-03-29 03:35:32.533095 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:35:32.533110 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:35:32.533115 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:35:32.533127 | orchestrator | 2026-03-29 03:35:32.533132 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-29 03:35:32.533137 | orchestrator | Sunday 29 March 2026 03:35:31 +0000 (0:00:00.295) 0:02:36.047 ********** 2026-03-29 03:35:32.533143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 03:35:32.533153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:32.533159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:32.533169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 03:35:32.533173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:32.533178 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:35:32.533187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 03:35:37.359383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:37.359616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:37.359653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 03:35:37.359707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:37.359727 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:35:37.359747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 03:35:37.359765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:37.359808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:37.359827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 03:35:37.359868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:37.359887 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:35:37.359905 | orchestrator | 2026-03-29 03:35:37.359924 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 03:35:37.359942 | orchestrator | Sunday 29 March 2026 03:35:32 +0000 (0:00:00.665) 0:02:36.712 ********** 2026-03-29 03:35:37.359961 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:35:37.359978 | orchestrator | 2026-03-29 03:35:37.359995 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-29 03:35:37.360014 | orchestrator | Sunday 29 March 2026 03:35:33 +0000 (0:00:00.728) 0:02:37.441 ********** 2026-03-29 03:35:37.360031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:37.360049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:37.360079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:38.873045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:38.873198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:38.873229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:38.873252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:35:38.873551 | orchestrator | 2026-03-29 03:35:38.873576 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-29 03:35:38.873603 | orchestrator | Sunday 29 March 2026 03:35:38 +0000 (0:00:04.967) 0:02:42.408 ********** 2026-03-29 03:35:38.873645 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 03:35:38.968578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:38.968670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:38.968682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 03:35:38.968692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:38.968699 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:35:38.968708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 03:35:38.968734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:38.968757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:38.968761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 03:35:38.968765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:38.968769 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:35:38.968773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 03:35:38.968777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:38.968785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:38.968796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-29 03:35:39.722669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:39.722750 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:35:39.722761 | orchestrator | 2026-03-29 03:35:39.722770 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-29 03:35:39.722778 | orchestrator | Sunday 29 March 2026 03:35:38 +0000 (0:00:00.648) 0:02:43.056 ********** 2026-03-29 03:35:39.722786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-03-29 03:35:39.722794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:39.722801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:39.722832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 03:35:39.722864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:39.722872 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:35:39.722879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 03:35:39.722885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:39.722892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:39.722904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 03:35:39.722911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:39.722921 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:35:39.722943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 03:35:44.342074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 03:35:44.342181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 03:35:44.342197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 03:35:44.342233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 03:35:44.342244 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:35:44.342256 | orchestrator | 2026-03-29 03:35:44.342267 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-29 
03:35:44.342278 | orchestrator | Sunday 29 March 2026 03:35:40 +0000 (0:00:01.223) 0:02:44.280 ********** 2026-03-29 03:35:44.342288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:44.342327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:44.342338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:44.342349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:44.342366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:44.342375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:35:44.342385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:44.342404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:59.964202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:59.964324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:59.964365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:59.964376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:35:59.964386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:35:59.964411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-03-29 03:35:59.964504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:35:59.964521 | orchestrator | 2026-03-29 03:35:59.964535 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-29 03:35:59.964548 | orchestrator | Sunday 29 March 2026 03:35:45 +0000 (0:00:05.112) 0:02:49.393 ********** 2026-03-29 03:35:59.964560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 03:35:59.964572 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 03:35:59.964583 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 03:35:59.964593 | orchestrator | 2026-03-29 03:35:59.964605 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-29 03:35:59.964627 | orchestrator | Sunday 29 March 2026 03:35:46 +0000 (0:00:01.635) 0:02:51.028 ********** 2026-03-29 03:35:59.964640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:59.964653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:59.964671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:35:59.964690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:36:14.902228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:36:14.902378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:36:14.902397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:36:14.902580 | orchestrator | 2026-03-29 03:36:14.902594 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-29 03:36:14.902608 | orchestrator | Sunday 29 March 2026 03:36:03 +0000 (0:00:16.181) 0:03:07.209 ********** 2026-03-29 03:36:14.902622 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:36:14.902636 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:36:14.902649 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:36:14.902660 | orchestrator | 2026-03-29 03:36:14.902673 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-29 03:36:14.902686 | orchestrator | Sunday 29 March 2026 03:36:04 +0000 (0:00:01.746) 0:03:08.956 ********** 2026-03-29 03:36:14.902699 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-29 03:36:14.902712 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 03:36:14.902725 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 03:36:14.902738 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 03:36:14.902754 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 03:36:14.902768 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 03:36:14.902803 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 03:36:14.902818 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 03:36:14.902834 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 03:36:14.902850 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 03:36:14.902864 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 03:36:14.902888 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 03:36:14.902902 | orchestrator | 2026-03-29 03:36:14.902917 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-29 03:36:14.902931 | orchestrator | Sunday 29 March 2026 03:36:09 +0000 (0:00:04.959) 0:03:13.916 ********** 2026-03-29 03:36:14.902945 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-29 03:36:14.902958 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 03:36:14.902980 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 03:36:23.411792 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 03:36:23.411889 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 03:36:23.411898 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 03:36:23.411905 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 03:36:23.411911 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 03:36:23.411917 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 03:36:23.411924 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 03:36:23.411930 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 03:36:23.411936 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 03:36:23.411943 | orchestrator | 2026-03-29 03:36:23.411950 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-29 03:36:23.411957 | orchestrator | Sunday 29 March 2026 03:36:14 +0000 (0:00:05.065) 0:03:18.982 ********** 2026-03-29 03:36:23.411963 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-03-29 03:36:23.411970 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 03:36:23.411976 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 03:36:23.411982 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 03:36:23.411988 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 03:36:23.411995 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 03:36:23.412001 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 03:36:23.412007 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 03:36:23.412013 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 03:36:23.412019 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 03:36:23.412025 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 03:36:23.412031 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 03:36:23.412037 | orchestrator | 2026-03-29 03:36:23.412044 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-29 03:36:23.412050 | orchestrator | Sunday 29 March 2026 03:36:20 +0000 (0:00:05.223) 0:03:24.206 ********** 2026-03-29 03:36:23.412060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:36:23.412100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:36:23.412125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 03:36:23.412134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:36:23.412141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 03:36:23.412147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-29 03:36:23.412154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:23.412168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:23.412179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 03:36:23.412190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:37:47.463768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:37:47.463930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 03:37:47.463945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:37:47.463974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:37:47.463995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 03:37:47.464002 | orchestrator | 2026-03-29 
03:37:47.464010 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 03:37:47.464019 | orchestrator | Sunday 29 March 2026 03:36:24 +0000 (0:00:04.164) 0:03:28.370 ********** 2026-03-29 03:37:47.464025 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:37:47.464032 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:37:47.464037 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:37:47.464042 | orchestrator | 2026-03-29 03:37:47.464048 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-29 03:37:47.464053 | orchestrator | Sunday 29 March 2026 03:36:24 +0000 (0:00:00.311) 0:03:28.682 ********** 2026-03-29 03:37:47.464059 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464064 | orchestrator | 2026-03-29 03:37:47.464069 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-29 03:37:47.464074 | orchestrator | Sunday 29 March 2026 03:36:26 +0000 (0:00:02.263) 0:03:30.946 ********** 2026-03-29 03:37:47.464080 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464085 | orchestrator | 2026-03-29 03:37:47.464091 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-29 03:37:47.464096 | orchestrator | Sunday 29 March 2026 03:36:29 +0000 (0:00:02.311) 0:03:33.257 ********** 2026-03-29 03:37:47.464102 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464107 | orchestrator | 2026-03-29 03:37:47.464112 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-29 03:37:47.464118 | orchestrator | Sunday 29 March 2026 03:36:31 +0000 (0:00:02.415) 0:03:35.673 ********** 2026-03-29 03:37:47.464140 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464146 | orchestrator | 2026-03-29 03:37:47.464152 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-03-29 03:37:47.464157 | orchestrator | Sunday 29 March 2026 03:36:33 +0000 (0:00:02.397) 0:03:38.070 ********** 2026-03-29 03:37:47.464163 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464168 | orchestrator | 2026-03-29 03:37:47.464173 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 03:37:47.464179 | orchestrator | Sunday 29 March 2026 03:36:57 +0000 (0:00:23.459) 0:04:01.529 ********** 2026-03-29 03:37:47.464184 | orchestrator | 2026-03-29 03:37:47.464189 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 03:37:47.464194 | orchestrator | Sunday 29 March 2026 03:36:57 +0000 (0:00:00.089) 0:04:01.619 ********** 2026-03-29 03:37:47.464200 | orchestrator | 2026-03-29 03:37:47.464206 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 03:37:47.464211 | orchestrator | Sunday 29 March 2026 03:36:57 +0000 (0:00:00.068) 0:04:01.688 ********** 2026-03-29 03:37:47.464225 | orchestrator | 2026-03-29 03:37:47.464230 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-29 03:37:47.464237 | orchestrator | Sunday 29 March 2026 03:36:57 +0000 (0:00:00.071) 0:04:01.759 ********** 2026-03-29 03:37:47.464242 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464248 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:37:47.464253 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:37:47.464259 | orchestrator | 2026-03-29 03:37:47.464265 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-29 03:37:47.464271 | orchestrator | Sunday 29 March 2026 03:37:14 +0000 (0:00:16.883) 0:04:18.643 ********** 2026-03-29 03:37:47.464276 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464282 | orchestrator | changed: 
[testbed-node-1] 2026-03-29 03:37:47.464288 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:37:47.464294 | orchestrator | 2026-03-29 03:37:47.464300 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-29 03:37:47.464306 | orchestrator | Sunday 29 March 2026 03:37:25 +0000 (0:00:11.317) 0:04:29.961 ********** 2026-03-29 03:37:47.464312 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464318 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:37:47.464324 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:37:47.464330 | orchestrator | 2026-03-29 03:37:47.464336 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-29 03:37:47.464342 | orchestrator | Sunday 29 March 2026 03:37:36 +0000 (0:00:10.341) 0:04:40.302 ********** 2026-03-29 03:37:47.464348 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464355 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:37:47.464360 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:37:47.464388 | orchestrator | 2026-03-29 03:37:47.464395 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-29 03:37:47.464402 | orchestrator | Sunday 29 March 2026 03:37:41 +0000 (0:00:05.427) 0:04:45.730 ********** 2026-03-29 03:37:47.464409 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:37:47.464416 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:37:47.464423 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:37:47.464430 | orchestrator | 2026-03-29 03:37:47.464435 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:37:47.464441 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 03:37:47.464447 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-29 03:37:47.464452 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 03:37:47.464459 | orchestrator | 2026-03-29 03:37:47.464467 | orchestrator | 2026-03-29 03:37:47.464474 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:37:47.464482 | orchestrator | Sunday 29 March 2026 03:37:47 +0000 (0:00:05.801) 0:04:51.531 ********** 2026-03-29 03:37:47.464496 | orchestrator | =============================================================================== 2026-03-29 03:37:47.464502 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.46s 2026-03-29 03:37:47.464508 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.65s 2026-03-29 03:37:47.464514 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.88s 2026-03-29 03:37:47.464520 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.66s 2026-03-29 03:37:47.464526 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.18s 2026-03-29 03:37:47.464532 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.32s 2026-03-29 03:37:47.464538 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.37s 2026-03-29 03:37:47.464552 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.34s 2026-03-29 03:37:47.464559 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.57s 2026-03-29 03:37:47.464566 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.77s 2026-03-29 03:37:47.464572 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.67s 2026-03-29 03:37:47.464578 
| orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.72s 2026-03-29 03:37:47.464586 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.80s 2026-03-29 03:37:47.464593 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.66s 2026-03-29 03:37:47.464610 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.43s 2026-03-29 03:37:47.818921 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.39s 2026-03-29 03:37:47.819047 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.22s 2026-03-29 03:37:47.819071 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.11s 2026-03-29 03:37:47.819088 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.07s 2026-03-29 03:37:47.819105 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.97s 2026-03-29 03:37:50.192953 | orchestrator | 2026-03-29 03:37:50 | INFO  | Task 3ac8d68a-6fab-46c8-b7c9-a4200ff89b83 (ceilometer) was prepared for execution. 2026-03-29 03:37:50.193061 | orchestrator | 2026-03-29 03:37:50 | INFO  | It takes a moment until task 3ac8d68a-6fab-46c8-b7c9-a4200ff89b83 (ceilometer) has been started and output is visible here. 
2026-03-29 03:38:14.253690 | orchestrator | 2026-03-29 03:38:14.253771 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:38:14.253780 | orchestrator | 2026-03-29 03:38:14.253785 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:38:14.253789 | orchestrator | Sunday 29 March 2026 03:37:54 +0000 (0:00:00.286) 0:00:00.286 ********** 2026-03-29 03:38:14.253793 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:38:14.253798 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:38:14.253802 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:38:14.253806 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:38:14.253810 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:38:14.253814 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:38:14.253817 | orchestrator | 2026-03-29 03:38:14.253821 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:38:14.253825 | orchestrator | Sunday 29 March 2026 03:37:55 +0000 (0:00:00.725) 0:00:01.012 ********** 2026-03-29 03:38:14.253829 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-03-29 03:38:14.253834 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-03-29 03:38:14.253838 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-03-29 03:38:14.253841 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-03-29 03:38:14.253845 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-03-29 03:38:14.253849 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-03-29 03:38:14.253852 | orchestrator | 2026-03-29 03:38:14.253856 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-03-29 03:38:14.253860 | orchestrator | 2026-03-29 03:38:14.253864 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-03-29 03:38:14.253868 | orchestrator | Sunday 29 March 2026 03:37:55 +0000 (0:00:00.632) 0:00:01.645 ********** 2026-03-29 03:38:14.253872 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 03:38:14.253878 | orchestrator | 2026-03-29 03:38:14.253882 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-03-29 03:38:14.253901 | orchestrator | Sunday 29 March 2026 03:37:57 +0000 (0:00:01.219) 0:00:02.864 ********** 2026-03-29 03:38:14.253905 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:14.253909 | orchestrator | 2026-03-29 03:38:14.253913 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-03-29 03:38:14.253916 | orchestrator | Sunday 29 March 2026 03:37:57 +0000 (0:00:00.144) 0:00:03.009 ********** 2026-03-29 03:38:14.253920 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:14.253924 | orchestrator | 2026-03-29 03:38:14.253928 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-03-29 03:38:14.253931 | orchestrator | Sunday 29 March 2026 03:37:57 +0000 (0:00:00.142) 0:00:03.152 ********** 2026-03-29 03:38:14.253935 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 03:38:14.253939 | orchestrator | 2026-03-29 03:38:14.253943 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-03-29 03:38:14.253957 | orchestrator | Sunday 29 March 2026 03:38:01 +0000 (0:00:03.933) 0:00:07.085 ********** 2026-03-29 03:38:14.253961 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 03:38:14.253965 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-03-29 03:38:14.253968 | orchestrator | 
2026-03-29 03:38:14.253972 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-03-29 03:38:14.253976 | orchestrator | Sunday 29 March 2026 03:38:05 +0000 (0:00:03.749) 0:00:10.835 ********** 2026-03-29 03:38:14.253980 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 03:38:14.253984 | orchestrator | 2026-03-29 03:38:14.253988 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-03-29 03:38:14.253991 | orchestrator | Sunday 29 March 2026 03:38:08 +0000 (0:00:03.308) 0:00:14.143 ********** 2026-03-29 03:38:14.253995 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-03-29 03:38:14.253999 | orchestrator | 2026-03-29 03:38:14.254003 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-03-29 03:38:14.254007 | orchestrator | Sunday 29 March 2026 03:38:12 +0000 (0:00:04.182) 0:00:18.326 ********** 2026-03-29 03:38:14.254011 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:14.254047 | orchestrator | 2026-03-29 03:38:14.254051 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-03-29 03:38:14.254055 | orchestrator | Sunday 29 March 2026 03:38:12 +0000 (0:00:00.153) 0:00:18.479 ********** 2026-03-29 03:38:14.254061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:14.254078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:14.254082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:14.254092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:14.254101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:14.254106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:14.254109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:14.254118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:18.903598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:18.903753 | orchestrator | 2026-03-29 03:38:18.903781 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-03-29 03:38:18.903816 | orchestrator | Sunday 29 March 2026 03:38:14 +0000 (0:00:01.592) 0:00:20.072 ********** 2026-03-29 03:38:18.903835 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-03-29 03:38:18.903853 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 03:38:18.903869 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 03:38:18.903886 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 03:38:18.903902 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 03:38:18.903917 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 03:38:18.903933 | orchestrator | 2026-03-29 03:38:18.903951 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-03-29 03:38:18.903967 | orchestrator | Sunday 29 March 2026 03:38:15 +0000 (0:00:01.544) 0:00:21.617 ********** 2026-03-29 03:38:18.903985 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:38:18.904002 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:38:18.904019 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:38:18.904035 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:38:18.904051 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:38:18.904068 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:38:18.904083 | orchestrator | 2026-03-29 03:38:18.904100 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-03-29 03:38:18.904118 | orchestrator | Sunday 29 March 2026 03:38:16 +0000 (0:00:00.599) 0:00:22.217 ********** 2026-03-29 03:38:18.904136 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:18.904153 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:18.904170 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:18.904186 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:18.904203 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:18.904220 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:18.904237 | orchestrator | 2026-03-29 03:38:18.904255 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-03-29 03:38:18.904272 | orchestrator | Sunday 29 March 2026 03:38:17 +0000 (0:00:00.754) 0:00:22.971 ********** 2026-03-29 03:38:18.904290 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:38:18.904307 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:38:18.904324 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:38:18.904343 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:38:18.904520 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:38:18.904540 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:38:18.904557 | orchestrator | 2026-03-29 03:38:18.904667 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-03-29 03:38:18.904689 | orchestrator | Sunday 29 March 2026 03:38:17 +0000 (0:00:00.612) 0:00:23.584 ********** 2026-03-29 03:38:18.904710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:18.904744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:18.904754 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:18.904790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:18.904801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:18.904811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:18.904821 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:18.904837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:18.904848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:18.904865 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:18.904875 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:18.904885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': 
{'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:18.904895 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:18.904913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:23.521951 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:23.522103 | orchestrator | 2026-03-29 03:38:23.522125 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-03-29 03:38:23.522180 | orchestrator | Sunday 29 March 2026 03:38:18 +0000 (0:00:01.135) 0:00:24.719 ********** 2026-03-29 03:38:23.522191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:23.522200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:23.522207 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:23.522224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:23.522247 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:23.522253 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:23.522259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:23.522264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 
03:38:23.522269 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:23.522290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:23.522296 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:23.522302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:23.522307 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:23.522315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:23.522327 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:23.522332 | orchestrator | 2026-03-29 03:38:23.522338 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-03-29 03:38:23.522406 | orchestrator | Sunday 29 March 2026 03:38:19 +0000 (0:00:00.814) 0:00:25.534 ********** 2026-03-29 03:38:23.522412 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:38:23.522418 | orchestrator | 2026-03-29 03:38:23.522423 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-03-29 03:38:23.522429 | orchestrator | Sunday 29 March 2026 03:38:20 +0000 (0:00:00.684) 0:00:26.219 ********** 2026-03-29 03:38:23.522434 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:38:23.522440 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:38:23.522445 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:38:23.522450 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:38:23.522455 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:38:23.522460 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:38:23.522465 | orchestrator | 2026-03-29 03:38:23.522470 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-03-29 03:38:23.522475 | orchestrator | Sunday 29 March 2026 03:38:21 +0000 (0:00:00.822) 
0:00:27.041 ********** 2026-03-29 03:38:23.522480 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:38:23.522485 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:38:23.522490 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:38:23.522496 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:38:23.522502 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:38:23.522508 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:38:23.522514 | orchestrator | 2026-03-29 03:38:23.522520 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-03-29 03:38:23.522526 | orchestrator | Sunday 29 March 2026 03:38:22 +0000 (0:00:00.937) 0:00:27.979 ********** 2026-03-29 03:38:23.522532 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:23.522538 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:23.522544 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:23.522549 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:23.522555 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:23.522561 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:23.522567 | orchestrator | 2026-03-29 03:38:23.522573 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-03-29 03:38:23.522578 | orchestrator | Sunday 29 March 2026 03:38:22 +0000 (0:00:00.769) 0:00:28.749 ********** 2026-03-29 03:38:23.522584 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:23.522590 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:23.522596 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:23.522602 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:23.522608 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:23.522613 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:23.522619 | orchestrator | 2026-03-29 03:38:28.569976 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-03-29 03:38:28.570110 | orchestrator | Sunday 29 March 2026 03:38:23 +0000 (0:00:00.596) 0:00:29.345 ********** 2026-03-29 03:38:28.570122 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:38:28.570130 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 03:38:28.570137 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 03:38:28.570144 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 03:38:28.570150 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 03:38:28.570194 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 03:38:28.570202 | orchestrator | 2026-03-29 03:38:28.570209 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-03-29 03:38:28.570216 | orchestrator | Sunday 29 March 2026 03:38:25 +0000 (0:00:01.601) 0:00:30.946 ********** 2026-03-29 03:38:28.570225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:28.570246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:28.570254 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:28.570260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:28.570268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:28.570274 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:28.570281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:28.570302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:28.570315 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:28.570327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:28.570425 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 03:38:28.570446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:28.570454 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:28.570460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:28.570466 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:28.570473 | orchestrator | 2026-03-29 03:38:28.570479 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-03-29 03:38:28.570487 | orchestrator | Sunday 29 March 2026 03:38:25 +0000 (0:00:00.790) 0:00:31.737 ********** 2026-03-29 03:38:28.570494 | orchestrator | 
skipping: [testbed-node-0] 2026-03-29 03:38:28.570501 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:28.570508 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:28.570515 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:28.570522 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:28.570529 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:28.570536 | orchestrator | 2026-03-29 03:38:28.570544 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-03-29 03:38:28.570551 | orchestrator | Sunday 29 March 2026 03:38:26 +0000 (0:00:00.770) 0:00:32.507 ********** 2026-03-29 03:38:28.570558 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:38:28.570565 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 03:38:28.570572 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 03:38:28.570579 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 03:38:28.570592 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 03:38:28.570600 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 03:38:28.570607 | orchestrator | 2026-03-29 03:38:28.570614 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-03-29 03:38:28.570621 | orchestrator | Sunday 29 March 2026 03:38:27 +0000 (0:00:01.296) 0:00:33.804 ********** 2026-03-29 03:38:28.570637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.327606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:34.327707 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:34.327720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.327744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:34.327750 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:34.327754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.327758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:34.327781 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:34.327786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.327791 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:34.327807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.327811 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:34.327818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.327822 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:34.327825 | orchestrator | 2026-03-29 03:38:34.327830 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-03-29 03:38:34.327835 | orchestrator | Sunday 29 March 2026 03:38:29 +0000 (0:00:01.211) 0:00:35.015 ********** 2026-03-29 03:38:34.327839 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:34.327843 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:34.327847 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:34.327850 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:34.327854 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:34.327858 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:34.327861 | orchestrator | 2026-03-29 03:38:34.327865 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-03-29 03:38:34.327869 | orchestrator | Sunday 29 March 2026 03:38:29 +0000 (0:00:00.793) 0:00:35.808 ********** 2026-03-29 03:38:34.327873 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:34.327877 | orchestrator | 2026-03-29 03:38:34.327881 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-03-29 03:38:34.327885 | orchestrator | Sunday 29 March 2026 03:38:30 +0000 (0:00:00.142) 0:00:35.951 ********** 2026-03-29 03:38:34.327892 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:34.327896 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:34.327900 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:34.327904 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:34.327907 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:34.327911 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:34.327915 | 
orchestrator | 2026-03-29 03:38:34.327918 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-29 03:38:34.327922 | orchestrator | Sunday 29 March 2026 03:38:30 +0000 (0:00:00.607) 0:00:36.559 ********** 2026-03-29 03:38:34.327927 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 03:38:34.327932 | orchestrator | 2026-03-29 03:38:34.327936 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-03-29 03:38:34.327939 | orchestrator | Sunday 29 March 2026 03:38:32 +0000 (0:00:01.279) 0:00:37.839 ********** 2026-03-29 03:38:34.327943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:34.327952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:34.814994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:34.815098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:34.815110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:34.815135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:34.815144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:34.815152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:34.815174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:34.815183 | orchestrator | 2026-03-29 03:38:34.815192 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-03-29 03:38:34.815200 | orchestrator | Sunday 29 March 2026 03:38:34 +0000 (0:00:02.304) 0:00:40.144 ********** 2026-03-29 03:38:34.815213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.815227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:34.815236 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:34.815245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.815252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:34.815260 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:34.815268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:34.815281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:36.601604 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:36.601755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:36.601804 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:36.601818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:36.601829 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:36.601841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-03-29 03:38:36.601852 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:36.601863 | orchestrator | 2026-03-29 03:38:36.601875 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-03-29 03:38:36.601887 | orchestrator | Sunday 29 March 2026 03:38:35 +0000 (0:00:00.815) 0:00:40.959 ********** 2026-03-29 03:38:36.601899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:36.601912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:36.601943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:36.601970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:36.601982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:36.601994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:36.602005 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:36.602078 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:36.602092 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:36.602104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:36.602118 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:36.602132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:36.602145 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:36.602169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:44.113482 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:44.113572 | orchestrator | 2026-03-29 03:38:44.113581 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-03-29 03:38:44.113590 | orchestrator | Sunday 29 March 2026 03:38:36 +0000 (0:00:01.458) 0:00:42.417 ********** 2026-03-29 03:38:44.113612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:44.113622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:44.113630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:44.113638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:44.113644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:44.113694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:44.113707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 
'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:44.113723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:44.113736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:44.113746 | orchestrator | 2026-03-29 03:38:44.113757 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-03-29 03:38:44.113767 | orchestrator | Sunday 29 March 2026 03:38:39 +0000 (0:00:02.638) 
0:00:45.056 ********** 2026-03-29 03:38:44.113778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:44.113790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:44.113817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:53.404936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:53.405016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:53.405023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:53.405029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:53.405050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:53.405055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:53.405059 | orchestrator | 2026-03-29 03:38:53.405065 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-03-29 03:38:53.405079 | orchestrator | Sunday 29 March 2026 03:38:44 +0000 (0:00:04.878) 0:00:49.935 ********** 2026-03-29 03:38:53.405084 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:38:53.405089 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 03:38:53.405096 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 03:38:53.405100 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 03:38:53.405104 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 03:38:53.405108 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 03:38:53.405111 | orchestrator | 2026-03-29 03:38:53.405115 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-03-29 03:38:53.405119 | orchestrator | Sunday 29 March 2026 03:38:45 +0000 (0:00:01.496) 0:00:51.431 ********** 2026-03-29 03:38:53.405123 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:53.405127 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:53.405131 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:53.405134 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:53.405138 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:53.405142 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:53.405145 | orchestrator | 2026-03-29 03:38:53.405149 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-03-29 
03:38:53.405153 | orchestrator | Sunday 29 March 2026 03:38:46 +0000 (0:00:00.579) 0:00:52.011 ********** 2026-03-29 03:38:53.405157 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:53.405161 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:53.405165 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:53.405168 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:38:53.405172 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:38:53.405176 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:38:53.405179 | orchestrator | 2026-03-29 03:38:53.405183 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-03-29 03:38:53.405187 | orchestrator | Sunday 29 March 2026 03:38:47 +0000 (0:00:01.647) 0:00:53.658 ********** 2026-03-29 03:38:53.405191 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:53.405194 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:53.405198 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:53.405202 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:38:53.405206 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:38:53.405209 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:38:53.405213 | orchestrator | 2026-03-29 03:38:53.405217 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-03-29 03:38:53.405221 | orchestrator | Sunday 29 March 2026 03:38:49 +0000 (0:00:01.445) 0:00:55.103 ********** 2026-03-29 03:38:53.405229 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:38:53.405232 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 03:38:53.405236 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 03:38:53.405250 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 03:38:53.405253 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 03:38:53.405262 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-03-29 03:38:53.405266 | orchestrator | 2026-03-29 03:38:53.405270 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-03-29 03:38:53.405274 | orchestrator | Sunday 29 March 2026 03:38:50 +0000 (0:00:01.589) 0:00:56.693 ********** 2026-03-29 03:38:53.405278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:53.405283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:53.405287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:53.405297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:54.336370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:54.336460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:38:54.336468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:54.336474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:54.336478 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:38:54.336483 | orchestrator | 2026-03-29 03:38:54.336488 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-03-29 03:38:54.336493 | orchestrator | Sunday 29 March 2026 03:38:53 +0000 (0:00:02.531) 0:00:59.225 ********** 2026-03-29 03:38:54.336509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:54.336525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:54.336534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:54.336539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:54.336543 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:54.336548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:54.336552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:54.336556 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:54.336560 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:54.336567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:54.336571 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:54.336578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:57.759575 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:57.759654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:57.759662 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:57.759666 | orchestrator | 2026-03-29 03:38:57.759671 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-03-29 03:38:57.759676 | orchestrator | Sunday 29 March 2026 03:38:54 +0000 (0:00:00.933) 0:01:00.158 ********** 2026-03-29 03:38:57.759680 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:38:57.759684 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 03:38:57.759688 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:57.759692 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:57.759696 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:57.759700 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:57.759704 | orchestrator | 2026-03-29 03:38:57.759708 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-03-29 03:38:57.759712 | orchestrator | Sunday 29 March 2026 03:38:55 +0000 (0:00:00.813) 0:01:00.972 ********** 2026-03-29 03:38:57.759717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:57.759722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:57.759727 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
03:38:57.759744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:57.759765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:57.759769 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:38:57.759784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-29 03:38:57.759788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 03:38:57.759792 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:38:57.759796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:57.759800 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:38:57.759804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:57.759812 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:38:57.759819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-29 03:38:57.759823 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:38:57.759827 | orchestrator | 2026-03-29 03:38:57.759830 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-03-29 03:38:57.759834 | orchestrator | Sunday 29 March 2026 03:38:55 +0000 (0:00:00.852) 0:01:01.824 ********** 2026-03-29 03:38:57.759842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:39:27.786337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:39:27.786427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-29 03:39:27.786435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:39:27.786441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:39:27.786480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-29 03:39:27.786490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:39:27.786513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:39:27.786520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-29 03:39:27.786527 | orchestrator | 
2026-03-29 03:39:27.786544 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-29 03:39:27.786552 | orchestrator | Sunday 29 March 2026 03:38:57 +0000 (0:00:01.754) 0:01:03.579 ********** 2026-03-29 03:39:27.786566 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:39:27.786574 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:39:27.786580 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:39:27.786586 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:39:27.786591 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:39:27.786597 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:39:27.786603 | orchestrator | 2026-03-29 03:39:27.786610 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-03-29 03:39:27.786625 | orchestrator | Sunday 29 March 2026 03:38:58 +0000 (0:00:00.626) 0:01:04.206 ********** 2026-03-29 03:39:27.786646 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:39:27.786652 | orchestrator | 2026-03-29 03:39:27.786658 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-29 03:39:27.786665 | orchestrator | Sunday 29 March 2026 03:39:02 +0000 (0:00:04.336) 0:01:08.542 ********** 2026-03-29 03:39:27.786671 | orchestrator | 2026-03-29 03:39:27.786677 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-29 03:39:27.786684 | orchestrator | Sunday 29 March 2026 03:39:02 +0000 (0:00:00.073) 0:01:08.616 ********** 2026-03-29 03:39:27.786690 | orchestrator | 2026-03-29 03:39:27.786697 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-29 03:39:27.786703 | orchestrator | Sunday 29 March 2026 03:39:02 +0000 (0:00:00.075) 0:01:08.691 ********** 2026-03-29 03:39:27.786709 | orchestrator | 2026-03-29 03:39:27.786716 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-03-29 03:39:27.786722 | orchestrator | Sunday 29 March 2026 03:39:03 +0000 (0:00:00.267) 0:01:08.959 ********** 2026-03-29 03:39:27.786728 | orchestrator | 2026-03-29 03:39:27.786735 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-29 03:39:27.786739 | orchestrator | Sunday 29 March 2026 03:39:03 +0000 (0:00:00.067) 0:01:09.027 ********** 2026-03-29 03:39:27.786743 | orchestrator | 2026-03-29 03:39:27.786747 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-29 03:39:27.786750 | orchestrator | Sunday 29 March 2026 03:39:03 +0000 (0:00:00.068) 0:01:09.095 ********** 2026-03-29 03:39:27.786754 | orchestrator | 2026-03-29 03:39:27.786763 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-03-29 03:39:27.786767 | orchestrator | Sunday 29 March 2026 03:39:03 +0000 (0:00:00.071) 0:01:09.167 ********** 2026-03-29 03:39:27.786771 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:39:27.786774 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:39:27.786778 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:39:27.786782 | orchestrator | 2026-03-29 03:39:27.786786 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-03-29 03:39:27.786789 | orchestrator | Sunday 29 March 2026 03:39:13 +0000 (0:00:10.574) 0:01:19.742 ********** 2026-03-29 03:39:27.786793 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:39:27.786797 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:39:27.786800 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:39:27.786804 | orchestrator | 2026-03-29 03:39:27.786808 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-03-29 03:39:27.786812 | orchestrator | Sunday 29 March 2026 03:39:21 +0000 (0:00:07.560) 
0:01:27.302 ********** 2026-03-29 03:39:27.786815 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:39:27.786819 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:39:27.786823 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:39:27.786826 | orchestrator | 2026-03-29 03:39:27.786830 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:39:27.786835 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-29 03:39:27.786841 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 03:39:27.786850 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 03:39:28.292349 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-29 03:39:28.292424 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-29 03:39:28.292429 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-29 03:39:28.292453 | orchestrator | 2026-03-29 03:39:28.292458 | orchestrator | 2026-03-29 03:39:28.292463 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:39:28.292468 | orchestrator | Sunday 29 March 2026 03:39:27 +0000 (0:00:06.300) 0:01:33.603 ********** 2026-03-29 03:39:28.292472 | orchestrator | =============================================================================== 2026-03-29 03:39:28.292476 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.57s 2026-03-29 03:39:28.292480 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 7.56s 2026-03-29 03:39:28.292484 | orchestrator | ceilometer : Restart ceilometer-compute 
container ----------------------- 6.30s 2026-03-29 03:39:28.292487 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.88s 2026-03-29 03:39:28.292491 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.34s 2026-03-29 03:39:28.292495 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.18s 2026-03-29 03:39:28.292499 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.93s 2026-03-29 03:39:28.292503 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.75s 2026-03-29 03:39:28.292506 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.31s 2026-03-29 03:39:28.292510 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.64s 2026-03-29 03:39:28.292514 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.53s 2026-03-29 03:39:28.292517 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.30s 2026-03-29 03:39:28.292521 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.76s 2026-03-29 03:39:28.292525 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.65s 2026-03-29 03:39:28.292529 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.60s 2026-03-29 03:39:28.292533 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.59s 2026-03-29 03:39:28.292537 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.59s 2026-03-29 03:39:28.292540 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.54s 2026-03-29 03:39:28.292544 | orchestrator | ceilometer : Check custom event_definitions.yaml 
exists ----------------- 1.50s 2026-03-29 03:39:28.292548 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.46s 2026-03-29 03:39:30.493259 | orchestrator | 2026-03-29 03:39:30 | INFO  | Task eb910f9e-9a81-4ff4-9565-050fd8c34d97 (aodh) was prepared for execution. 2026-03-29 03:39:30.493393 | orchestrator | 2026-03-29 03:39:30 | INFO  | It takes a moment until task eb910f9e-9a81-4ff4-9565-050fd8c34d97 (aodh) has been started and output is visible here. 2026-03-29 03:40:02.767743 | orchestrator | 2026-03-29 03:40:02.767899 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:40:02.767927 | orchestrator | 2026-03-29 03:40:02.767943 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:40:02.767960 | orchestrator | Sunday 29 March 2026 03:39:34 +0000 (0:00:00.242) 0:00:00.242 ********** 2026-03-29 03:40:02.767976 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:40:02.767993 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:40:02.768007 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:40:02.768023 | orchestrator | 2026-03-29 03:40:02.768035 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:40:02.768044 | orchestrator | Sunday 29 March 2026 03:39:34 +0000 (0:00:00.304) 0:00:00.547 ********** 2026-03-29 03:40:02.768053 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-03-29 03:40:02.768062 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-03-29 03:40:02.768097 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-03-29 03:40:02.768107 | orchestrator | 2026-03-29 03:40:02.768115 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-03-29 03:40:02.768124 | orchestrator | 2026-03-29 03:40:02.768133 | orchestrator | TASK [aodh : include_tasks] 
**************************************************** 2026-03-29 03:40:02.768142 | orchestrator | Sunday 29 March 2026 03:39:34 +0000 (0:00:00.399) 0:00:00.947 ********** 2026-03-29 03:40:02.768150 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:40:02.768160 | orchestrator | 2026-03-29 03:40:02.768169 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-03-29 03:40:02.768178 | orchestrator | Sunday 29 March 2026 03:39:35 +0000 (0:00:00.497) 0:00:01.444 ********** 2026-03-29 03:40:02.768187 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-03-29 03:40:02.768196 | orchestrator | 2026-03-29 03:40:02.768204 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-03-29 03:40:02.768213 | orchestrator | Sunday 29 March 2026 03:39:39 +0000 (0:00:03.599) 0:00:05.044 ********** 2026-03-29 03:40:02.768223 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-03-29 03:40:02.768235 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-03-29 03:40:02.768245 | orchestrator | 2026-03-29 03:40:02.768255 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-03-29 03:40:02.768265 | orchestrator | Sunday 29 March 2026 03:39:45 +0000 (0:00:06.717) 0:00:11.762 ********** 2026-03-29 03:40:02.768275 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 03:40:02.768310 | orchestrator | 2026-03-29 03:40:02.768321 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-03-29 03:40:02.768331 | orchestrator | Sunday 29 March 2026 03:39:49 +0000 (0:00:03.532) 0:00:15.294 ********** 2026-03-29 03:40:02.768341 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-03-29 03:40:02.768352 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-03-29 03:40:02.768362 | orchestrator | 2026-03-29 03:40:02.768372 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-03-29 03:40:02.768382 | orchestrator | Sunday 29 March 2026 03:39:53 +0000 (0:00:03.997) 0:00:19.292 ********** 2026-03-29 03:40:02.768393 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 03:40:02.768403 | orchestrator | 2026-03-29 03:40:02.768413 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-03-29 03:40:02.768423 | orchestrator | Sunday 29 March 2026 03:39:56 +0000 (0:00:03.359) 0:00:22.652 ********** 2026-03-29 03:40:02.768433 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-03-29 03:40:02.768467 | orchestrator | 2026-03-29 03:40:02.768497 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-03-29 03:40:02.768518 | orchestrator | Sunday 29 March 2026 03:40:00 +0000 (0:00:03.905) 0:00:26.557 ********** 2026-03-29 03:40:02.768533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:02.768572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:02.768591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:02.768603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:02.768613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:02.768622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:02.768632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:02.768658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:03.951211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:03.951336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:03.951348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:03.951355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:03.951359 | orchestrator |
2026-03-29 03:40:03.951365 | orchestrator | TASK [aodh : Check if policies shall be overwritten] ***************************
2026-03-29 03:40:03.951370 | orchestrator | Sunday 29 March 2026 03:40:02 +0000 (0:00:02.169) 0:00:28.727 **********
2026-03-29 03:40:03.951374 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:40:03.951379 | orchestrator |
2026-03-29 03:40:03.951382 | orchestrator | TASK [aodh : Set aodh policy file] *********************************************
2026-03-29 03:40:03.951386 | orchestrator | Sunday 29 March 2026 03:40:02 +0000 (0:00:00.127) 0:00:28.854 **********
2026-03-29 03:40:03.951390 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:40:03.951394 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:40:03.951398 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:40:03.951401 | orchestrator |
2026-03-29 03:40:03.951405 | orchestrator | TASK [aodh : Copying over existing policy file] ********************************
2026-03-29 03:40:03.951426 | orchestrator | Sunday 29 March 2026 03:40:03 +0000 (0:00:00.453) 0:00:29.307 **********
2026-03-29 03:40:03.951431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:03.951460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:03.951465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:03.951468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:03.951472 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:40:03.951477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:03.951480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:03.951489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:03.951501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:08.700794 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:40:08.700871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:08.700878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:08.700884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:08.700888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:08.700907 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:40:08.700911 | orchestrator |
2026-03-29 03:40:08.700916 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-03-29 03:40:08.700921 | orchestrator | Sunday 29 March 2026 03:40:03 +0000 (0:00:00.606) 0:00:29.913 **********
2026-03-29 03:40:08.700925 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:40:08.700930 | orchestrator |
2026-03-29 03:40:08.700934 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] ***********
2026-03-29 03:40:08.700938 | orchestrator | Sunday 29 March 2026 03:40:04 +0000 (0:00:00.651) 0:00:30.564 **********
2026-03-29 03:40:08.700953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:08.700968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:08.700972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:08.700976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:08.700986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:08.700991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:08.700997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:08.701005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377739 | orchestrator |
2026-03-29 03:40:09.377751 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] ***
2026-03-29 03:40:09.377763 | orchestrator | Sunday 29 March 2026 03:40:08 +0000 (0:00:04.098) 0:00:34.663 **********
2026-03-29 03:40:09.377776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:09.377805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:09.377838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377870 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:40:09.377885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:09.377895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:09.377905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:09.377932 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:40:09.377953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:10.376764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:10.376853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:10.376860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:10.376865 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:40:10.376870 | orchestrator |
2026-03-29 03:40:10.376875 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ********
2026-03-29 03:40:10.376880 | orchestrator | Sunday 29 March 2026 03:40:09 +0000 (0:00:00.675) 0:00:35.339 **********
2026-03-29 03:40:10.376896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:10.376901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:10.376905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:10.376938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:10.376948 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:40:10.376953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:10.376957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:10.376961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:10.376969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:10.376973 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:40:10.376981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:14.411199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:14.411320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 03:40:14.411329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:40:14.411335 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:40:14.411342 | orchestrator |
2026-03-29 03:40:14.411348 | orchestrator | TASK [aodh : Copying over config.json files for services] **********************
2026-03-29 03:40:14.411355 | orchestrator | Sunday 29 March 2026 03:40:10 +0000 (0:00:00.997) 0:00:36.337 **********
2026-03-29 03:40:14.411373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:14.411379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:14.411414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 03:40:14.411420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 03:40:14.411426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name':
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:14.411431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:14.411440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:14.411445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:14.411456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:14.411465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:22.673908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674052 | orchestrator | 2026-03-29 03:40:22.674062 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-03-29 03:40:22.674070 | orchestrator | Sunday 29 March 2026 03:40:14 +0000 (0:00:04.032) 0:00:40.369 ********** 2026-03-29 03:40:22.674078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:22.674098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:22.674131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:22.674151 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:22.674213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026746 | orchestrator | 2026-03-29 03:40:28.026759 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-03-29 03:40:28.026769 | orchestrator | Sunday 29 March 2026 03:40:22 +0000 (0:00:08.264) 0:00:48.633 ********** 2026-03-29 03:40:28.026777 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:40:28.026787 | orchestrator | 
changed: [testbed-node-1] 2026-03-29 03:40:28.026795 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:40:28.026803 | orchestrator | 2026-03-29 03:40:28.026811 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-03-29 03:40:28.026820 | orchestrator | Sunday 29 March 2026 03:40:24 +0000 (0:00:01.822) 0:00:50.455 ********** 2026-03-29 03:40:28.026845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:28.026875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:28.026885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 03:40:28.026907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-29 03:40:28.026992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:41:23.745471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 03:41:23.745581 | orchestrator |
2026-03-29 03:41:23.745595 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-03-29 03:41:23.745604 | orchestrator | Sunday 29 March 2026 03:40:28 +0000 (0:00:03.533) 0:00:53.988 **********
2026-03-29 03:41:23.745611 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:41:23.745620 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:41:23.745627 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:41:23.745659 | orchestrator |
2026-03-29 03:41:23.745667 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-03-29 03:41:23.745674 | orchestrator | Sunday 29 March 2026 03:40:28 +0000 (0:00:00.298) 0:00:54.287 **********
2026-03-29 03:41:23.745681 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:41:23.745688 | orchestrator |
2026-03-29 03:41:23.745695 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-03-29 03:41:23.745702 | orchestrator | Sunday 29 March 2026 03:40:30 +0000 (0:00:02.297) 0:00:56.585 **********
2026-03-29 03:41:23.745709 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:41:23.745716 | orchestrator |
2026-03-29 03:41:23.745724 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-03-29 03:41:23.745731 | orchestrator | Sunday 29 March 2026 03:40:32 +0000 (0:00:02.379) 0:00:58.964 **********
2026-03-29 03:41:23.745738 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:41:23.745745 | orchestrator |
2026-03-29 03:41:23.745750 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-29 03:41:23.745755 | orchestrator | Sunday 29 March 2026 03:40:46 +0000 (0:00:13.427) 0:01:12.392 **********
2026-03-29 03:41:23.745759 | orchestrator |
2026-03-29 03:41:23.745776 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-29 03:41:23.745783 | orchestrator | Sunday 29 March 2026 03:40:46 +0000 (0:00:00.090) 0:01:12.482 **********
2026-03-29 03:41:23.745790 | orchestrator |
2026-03-29 03:41:23.745802 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-29 03:41:23.745809 | orchestrator | Sunday 29 March 2026 03:40:46 +0000 (0:00:00.071) 0:01:12.554 **********
2026-03-29 03:41:23.745816 | orchestrator |
2026-03-29 03:41:23.745822 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-03-29 03:41:23.745829 | orchestrator | Sunday 29 March 2026 03:40:46 +0000 (0:00:00.273) 0:01:12.827 **********
2026-03-29 03:41:23.745835 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:41:23.745842 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:41:23.745848 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:41:23.745854 | orchestrator |
2026-03-29 03:41:23.745860 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-03-29 03:41:23.745866 | orchestrator | Sunday 29 March 2026 03:40:57 +0000 (0:00:10.537) 0:01:23.365 **********
2026-03-29 03:41:23.745873 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:41:23.745880 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:41:23.745887 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:41:23.745894 | orchestrator |
2026-03-29 03:41:23.745902 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-03-29 03:41:23.745909 | orchestrator | Sunday 29 March 2026 03:41:02 +0000 (0:00:05.228) 0:01:28.593 **********
2026-03-29 03:41:23.745916 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:41:23.745924 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:41:23.745941 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:41:23.745946 | orchestrator |
2026-03-29 03:41:23.745956 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-03-29 03:41:23.745960 | orchestrator | Sunday 29 March 2026 03:41:12 +0000 (0:00:10.200) 0:01:38.794 **********
2026-03-29 03:41:23.745965 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:41:23.745969 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:41:23.745973 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:41:23.745978 | orchestrator |
2026-03-29 03:41:23.745982 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:41:23.745987 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 03:41:23.745993 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 03:41:23.745998 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 03:41:23.746010 | orchestrator |
2026-03-29 03:41:23.746042 | orchestrator |
2026-03-29 03:41:23.746049 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:41:23.746054 | orchestrator | Sunday 29 March 2026 03:41:23 +0000 (0:00:10.536) 0:01:49.331 **********
2026-03-29 03:41:23.746059 | orchestrator | ===============================================================================
2026-03-29 03:41:23.746064 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.43s
2026-03-29 03:41:23.746070 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.54s
2026-03-29 03:41:23.746097 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.54s
2026-03-29 03:41:23.746107 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.20s
2026-03-29 03:41:23.746115 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.26s
2026-03-29 03:41:23.746122 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.72s
2026-03-29 03:41:23.746129 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 5.23s
2026-03-29 03:41:23.746136 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.10s
2026-03-29 03:41:23.746143 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.03s
2026-03-29 03:41:23.746150 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.00s
2026-03-29 03:41:23.746157 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.91s
2026-03-29 03:41:23.746164 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.60s
2026-03-29 03:41:23.746171 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.53s
2026-03-29 03:41:23.746178 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.53s
2026-03-29 03:41:23.746185 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.36s
2026-03-29 03:41:23.746192 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.38s
2026-03-29 03:41:23.746200 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.30s
2026-03-29 03:41:23.746207 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.17s
2026-03-29 03:41:23.746214 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.82s
2026-03-29 03:41:23.746221 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.00s
2026-03-29 03:41:26.166699 | orchestrator | 2026-03-29 03:41:26 | INFO  | Task 24877bee-565b-4137-b41d-3deda9d364ff (kolla-ceph-rgw) was prepared for execution.
2026-03-29 03:41:26.166801 | orchestrator | 2026-03-29 03:41:26 | INFO  | It takes a moment until task 24877bee-565b-4137-b41d-3deda9d364ff (kolla-ceph-rgw) has been started and output is visible here.
2026-03-29 03:42:01.661116 | orchestrator |
2026-03-29 03:42:01.661288 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:42:01.661311 | orchestrator |
2026-03-29 03:42:01.661324 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:42:01.661336 | orchestrator | Sunday 29 March 2026 03:41:30 +0000 (0:00:00.285) 0:00:00.285 **********
2026-03-29 03:42:01.661348 | orchestrator | ok: [testbed-manager]
2026-03-29 03:42:01.661361 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:42:01.661372 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:42:01.661383 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:42:01.661394 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:42:01.661405 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:42:01.661416 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:42:01.661427 | orchestrator |
2026-03-29 03:42:01.661438 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:42:01.661449 | orchestrator | Sunday 29 March 2026 03:41:31 +0000 (0:00:00.852) 0:00:01.138 **********
2026-03-29 03:42:01.661484 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-29 03:42:01.661496 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-29 03:42:01.661508 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-29 03:42:01.661519 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-29 03:42:01.661530 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-29 03:42:01.661540 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-29 03:42:01.661551 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-29 03:42:01.661562 | orchestrator |
2026-03-29 03:42:01.661573 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-29 03:42:01.661594 | orchestrator |
2026-03-29 03:42:01.661613 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-29 03:42:01.661632 | orchestrator | Sunday 29 March 2026 03:41:32 +0000 (0:00:00.752) 0:00:01.891 **********
2026-03-29 03:42:01.661652 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:42:01.661674 | orchestrator |
2026-03-29 03:42:01.661693 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-29 03:42:01.661714 | orchestrator | Sunday 29 March 2026 03:41:33 +0000 (0:00:01.591) 0:00:03.482 **********
2026-03-29 03:42:01.661734 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-29 03:42:01.661754 | orchestrator |
2026-03-29 03:42:01.661774 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-29 03:42:01.661794 | orchestrator | Sunday 29 March 2026 03:41:37 +0000 (0:00:03.692) 0:00:07.175 **********
2026-03-29 03:42:01.661815 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-29 03:42:01.661834 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-29 03:42:01.661846 | orchestrator |
2026-03-29 03:42:01.661857 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-29 03:42:01.661868 | orchestrator | Sunday 29 March 2026 03:41:43 +0000 (0:00:06.194) 0:00:13.369 **********
2026-03-29 03:42:01.661878 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-29 03:42:01.661889 | orchestrator |
2026-03-29 03:42:01.661900 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-29 03:42:01.661911 | orchestrator | Sunday 29 March 2026 03:41:46 +0000 (0:00:03.140) 0:00:16.510 **********
2026-03-29 03:42:01.661922 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 03:42:01.661933 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-29 03:42:01.661943 | orchestrator |
2026-03-29 03:42:01.661955 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-29 03:42:01.661965 | orchestrator | Sunday 29 March 2026 03:41:50 +0000 (0:00:03.872) 0:00:20.382 **********
2026-03-29 03:42:01.661976 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-29 03:42:01.661987 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-29 03:42:01.661998 | orchestrator |
2026-03-29 03:42:01.662009 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-29 03:42:01.662081 | orchestrator | Sunday 29 March 2026 03:41:56 +0000 (0:00:05.938) 0:00:26.320 **********
2026-03-29 03:42:01.662093 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-29 03:42:01.662104 | orchestrator |
2026-03-29 03:42:01.662115 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:42:01.662126 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:01.662153 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:01.662164 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:01.662175 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:01.662186 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:01.662274 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:01.662302 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:01.662319 | orchestrator |
2026-03-29 03:42:01.662337 | orchestrator |
2026-03-29 03:42:01.662355 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:42:01.662373 | orchestrator | Sunday 29 March 2026 03:42:01 +0000 (0:00:04.760) 0:00:31.081 **********
2026-03-29 03:42:01.662391 | orchestrator | ===============================================================================
2026-03-29 03:42:01.662409 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.19s
2026-03-29 03:42:01.662427 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.94s
2026-03-29 03:42:01.662444 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.76s
2026-03-29 03:42:01.662460 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.87s
2026-03-29 03:42:01.662478 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.69s
2026-03-29 03:42:01.662496 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.14s
2026-03-29 03:42:01.662514 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.59s
2026-03-29 03:42:01.662532 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.85s
2026-03-29 03:42:01.662550 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s
2026-03-29 03:42:03.971271 | orchestrator | 2026-03-29 03:42:03 | INFO  | Task dfa60132-329a-4c36-b82e-91357d97bf74 (gnocchi) was prepared for execution.
2026-03-29 03:42:03.971349 | orchestrator | 2026-03-29 03:42:03 | INFO  | It takes a moment until task dfa60132-329a-4c36-b82e-91357d97bf74 (gnocchi) has been started and output is visible here.
2026-03-29 03:42:09.092940 | orchestrator |
2026-03-29 03:42:09.093084 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:42:09.093116 | orchestrator |
2026-03-29 03:42:09.093139 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:42:09.093160 | orchestrator | Sunday 29 March 2026 03:42:08 +0000 (0:00:00.264) 0:00:00.264 **********
2026-03-29 03:42:09.093178 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:42:09.093191 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:42:09.093273 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:42:09.093287 | orchestrator |
2026-03-29 03:42:09.093298 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:42:09.093309 | orchestrator | Sunday 29 March 2026 03:42:08 +0000 (0:00:00.340) 0:00:00.605 **********
2026-03-29 03:42:09.093321 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-03-29 03:42:09.093333 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-03-29 03:42:09.093344 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-03-29 03:42:09.093356 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-03-29 03:42:09.093366 | orchestrator |
2026-03-29 03:42:09.093378 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-03-29 03:42:09.093421 | orchestrator | skipping: no hosts matched
2026-03-29 03:42:09.093434 | orchestrator |
2026-03-29 03:42:09.093445 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:42:09.093457 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:09.093471 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:09.093484 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:42:09.093497 | orchestrator |
2026-03-29 03:42:09.093512 | orchestrator |
2026-03-29 03:42:09.093525 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:42:09.093539 | orchestrator | Sunday 29 March 2026 03:42:08 +0000 (0:00:00.352) 0:00:00.957 **********
2026-03-29 03:42:09.093553 | orchestrator | ===============================================================================
2026-03-29 03:42:09.093566 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s
2026-03-29 03:42:09.093577 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-03-29 03:42:11.387079 | orchestrator | 2026-03-29 03:42:11 | INFO  | Task fd8303bb-1307-4eda-aff8-75229f1eb098 (manila) was prepared for execution.
2026-03-29 03:42:11.387281 | orchestrator | 2026-03-29 03:42:11 | INFO  | It takes a moment until task fd8303bb-1307-4eda-aff8-75229f1eb098 (manila) has been started and output is visible here.
2026-03-29 03:42:54.598538 | orchestrator |
2026-03-29 03:42:54.598651 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:42:54.598666 | orchestrator |
2026-03-29 03:42:54.598675 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:42:54.598684 | orchestrator | Sunday 29 March 2026 03:42:15 +0000 (0:00:00.260) 0:00:00.260 **********
2026-03-29 03:42:54.598693 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:42:54.598704 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:42:54.598713 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:42:54.598722 | orchestrator |
2026-03-29 03:42:54.598731 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:42:54.598756 | orchestrator | Sunday 29 March 2026 03:42:15 +0000 (0:00:00.310) 0:00:00.570 **********
2026-03-29 03:42:54.598765 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-03-29 03:42:54.598774 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-03-29 03:42:54.598783 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-03-29 03:42:54.598793 | orchestrator |
2026-03-29 03:42:54.598801 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-03-29 03:42:54.598810 | orchestrator |
2026-03-29 03:42:54.598818 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-29 03:42:54.598827 | orchestrator | Sunday 29 March 2026 03:42:16 +0000 (0:00:00.408) 0:00:00.979 **********
2026-03-29 03:42:54.598837 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:42:54.598848 | orchestrator |
2026-03-29 03:42:54.598857 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-29 03:42:54.598866 | orchestrator | Sunday 29 March 2026 03:42:16 +0000 (0:00:00.548) 0:00:01.527 **********
2026-03-29 03:42:54.598875 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:42:54.598886 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:42:54.598910 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:42:54.598920 | orchestrator |
2026-03-29 03:42:54.598937 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-03-29 03:42:54.598946 | orchestrator | Sunday 29 March 2026 03:42:17 +0000 (0:00:00.454) 0:00:01.982 **********
2026-03-29 03:42:54.598955 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-03-29 03:42:54.598988 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-03-29 03:42:54.598998 | orchestrator |
2026-03-29 03:42:54.599007 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-03-29 03:42:54.599016 | orchestrator | Sunday 29 March 2026 03:42:24 +0000 (0:00:06.769) 0:00:08.751 **********
2026-03-29 03:42:54.599026 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-03-29 03:42:54.599035 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-03-29 03:42:54.599043 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-03-29 03:42:54.599051 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-03-29 03:42:54.599059 | orchestrator |
2026-03-29 03:42:54.599066 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-03-29 03:42:54.599074 | orchestrator | Sunday 29 March 2026 03:42:37 +0000 (0:00:13.441) 0:00:22.193 **********
2026-03-29 03:42:54.599082 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 03:42:54.599090 | orchestrator |
2026-03-29 03:42:54.599098 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-03-29 03:42:54.599105 | orchestrator | Sunday 29 March 2026 03:42:40 +0000 (0:00:03.400) 0:00:25.594 **********
2026-03-29 03:42:54.599113 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 03:42:54.599121 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-03-29 03:42:54.599129 | orchestrator |
2026-03-29 03:42:54.599137 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-03-29 03:42:54.599146 | orchestrator | Sunday 29 March 2026 03:42:44 +0000 (0:00:04.063) 0:00:29.657 **********
2026-03-29 03:42:54.599155 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 03:42:54.599163 | orchestrator |
2026-03-29 03:42:54.599256 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-03-29 03:42:54.599266 | orchestrator | Sunday 29 March 2026 03:42:48 +0000 (0:00:03.468) 0:00:33.125 **********
2026-03-29 03:42:54.599275 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-03-29 03:42:54.599284 | orchestrator |
2026-03-29 03:42:54.599292 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-03-29 03:42:54.599300 | orchestrator | Sunday 29 March 2026 03:42:52 +0000 (0:00:03.903) 0:00:37.029 **********
2026-03-29 03:42:54.599336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 03:42:54.599359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 03:42:54.599379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 03:42:54.599389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 03:42:54.599397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 03:42:54.599406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 03:42:54.599424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 03:43:05.164350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 03:43:05.164522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 03:43:05.164542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 03:43:05.164554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 03:43:05.164566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 03:43:05.164616 | orchestrator |
2026-03-29 03:43:05.164631 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-29 03:43:05.164645 | orchestrator | Sunday 29 March 2026 03:42:54 +0000 (0:00:02.357) 0:00:39.387 **********
2026-03-29 03:43:05.164657 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:43:05.164668 | orchestrator |
2026-03-29 03:43:05.164680 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-03-29 03:43:05.164691 | orchestrator | Sunday 29 March 2026 03:42:55 +0000 (0:00:00.576) 0:00:39.963 **********
2026-03-29 03:43:05.164704 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:43:05.164718 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:43:05.164731 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:43:05.164744 | orchestrator |
2026-03-29 03:43:05.164757 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-03-29 03:43:05.164770 | orchestrator | Sunday 29 March 2026 03:42:56 +0000 (0:00:00.983) 0:00:40.947 **********
2026-03-29 03:43:05.164795 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-29 03:43:05.164830 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-29 03:43:05.164851 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-29 03:43:05.164865 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-29 03:43:05.164878 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-29 03:43:05.164891 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-29 03:43:05.164903 | orchestrator |
2026-03-29 03:43:05.164916 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-03-29 03:43:05.164929 | orchestrator | Sunday 29 March 2026 03:42:57 +0000 (0:00:01.743) 0:00:42.690 **********
2026-03-29 03:43:05.164942 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-29 03:43:05.164955 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-29 03:43:05.164967 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-29 03:43:05.164980 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-29 03:43:05.164993 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-29 03:43:05.165006 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-29 03:43:05.165018 | orchestrator |
2026-03-29 03:43:05.165032 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-03-29 03:43:05.165044 | orchestrator | Sunday 29 March 2026 03:42:59 +0000 (0:00:00.693) 0:00:43.979 **********
2026-03-29 03:43:05.165058 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-03-29 03:43:05.165070 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-03-29 03:43:05.165082 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-03-29 03:43:05.165092 | orchestrator |
2026-03-29 03:43:05.165104 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-03-29 03:43:05.165114 | orchestrator | Sunday 29 March 2026 03:42:59 +0000 (0:00:00.139) 0:00:44.672 **********
2026-03-29 03:43:05.165125 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:43:05.165136 | orchestrator |
2026-03-29 03:43:05.165147 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-03-29 03:43:05.165158 | orchestrator | Sunday 29 March 2026 03:43:00 +0000 (0:00:00.139) 0:00:44.812 **********
2026-03-29 03:43:05.165193 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:43:05.165204 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:43:05.165225 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:43:05.165236 | orchestrator |
2026-03-29 03:43:05.165247 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-29 03:43:05.165258 | orchestrator | Sunday 29 March 2026 03:43:00 +0000 (0:00:00.525) 0:00:45.337 **********
2026-03-29 03:43:05.165269 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:43:05.165280 | orchestrator |
2026-03-29 03:43:05.165291 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-03-29 03:43:05.165302 | orchestrator | Sunday 29 March 2026 03:43:01 +0000 (0:00:00.554) 0:00:45.892 **********
2026-03-29 03:43:05.165322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 03:43:06.038685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 03:43:06.038766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 03:43:06.038776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 03:43:06.038784 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:06.038809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:06.038832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:06.038841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:06.038847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:06.038853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:06.038860 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:06.038873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:06.038880 | orchestrator | 2026-03-29 03:43:06.038888 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-03-29 03:43:06.038895 | orchestrator | Sunday 29 March 2026 03:43:05 +0000 (0:00:04.070) 0:00:49.963 ********** 2026-03-29 03:43:06.038911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:06.673084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673268 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:43:06.673280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:06.673294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673364 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:43:06.673378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:06.673387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:06.673417 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:43:06.673424 | orchestrator | 2026-03-29 03:43:06.673444 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-03-29 03:43:06.673461 | orchestrator | Sunday 29 March 2026 03:43:06 +0000 (0:00:00.876) 0:00:50.839 ********** 2026-03-29 03:43:06.673481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:11.301521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301635 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:43:11.301643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:11.301649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301693 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:43:11.301702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:11.301722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:11.301753 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:43:11.301761 | orchestrator | 2026-03-29 03:43:11.301771 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-03-29 03:43:11.301785 | orchestrator | Sunday 29 March 
2026 03:43:06 +0000 (0:00:00.846) 0:00:51.686 ********** 2026-03-29 03:43:11.301802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:43:18.006364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:43:18.006485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:43:18.006499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:18.006511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-29 03:43:18.006538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:18.006568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:18.006590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:18.006598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:18.006605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:18.006613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:18.006624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:18.006632 | orchestrator | 2026-03-29 03:43:18.006640 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-03-29 03:43:18.006649 | orchestrator | Sunday 29 March 2026 03:43:11 +0000 (0:00:04.617) 0:00:56.304 ********** 2026-03-29 03:43:18.006662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:43:22.356887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:43:22.357036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:43:22.357057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:22.357074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:22.357119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:22.357231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:22.357256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:22.357275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:22.357294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:22.357314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:22.357345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:43:22.357379 | orchestrator | 2026-03-29 03:43:22.357404 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-03-29 03:43:22.357425 | orchestrator | Sunday 29 March 2026 03:43:18 +0000 (0:00:06.500) 0:01:02.804 ********** 
2026-03-29 03:43:22.357446 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-03-29 03:43:22.357466 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-03-29 03:43:22.357485 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-03-29 03:43:22.357505 | orchestrator | 2026-03-29 03:43:22.357529 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-03-29 03:43:22.357550 | orchestrator | Sunday 29 March 2026 03:43:21 +0000 (0:00:03.708) 0:01:06.513 ********** 2026-03-29 03:43:22.357588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:25.621342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621458 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:43:25.621482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:25.621509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621548 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:43:25.621556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 03:43:25.621564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 03:43:25.621598 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:43:25.621605 | orchestrator | 2026-03-29 03:43:25.621614 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-03-29 03:43:25.621623 | orchestrator | Sunday 29 March 2026 03:43:22 +0000 (0:00:00.641) 0:01:07.154 ********** 2026-03-29 03:43:25.621638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:44:07.273759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:44:07.273859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 03:44:07.273904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:44:07.273913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:44:07.273921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 03:44:07.273941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:44:07.273950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:44:07.273958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-29 03:44:07.273970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:44:07.273983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-29 03:44:07.273991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 03:44:07.273998 | orchestrator |
2026-03-29 03:44:07.274007 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-03-29 03:44:07.274051 | orchestrator | Sunday 29 March 2026 03:43:25 +0000 (0:00:03.266) 0:01:10.421 **********
2026-03-29 03:44:07.274059 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:44:07.274067 | orchestrator |
2026-03-29 03:44:07.274075 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-03-29 03:44:07.274082 | orchestrator | Sunday 29 March 2026 03:43:27 +0000 (0:00:02.204) 0:01:12.625 **********
2026-03-29 03:44:07.274089 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:44:07.274096 | orchestrator |
2026-03-29 03:44:07.274103 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-03-29 03:44:07.274110 | orchestrator | Sunday 29 March 2026 03:43:30 +0000 (0:00:02.391) 0:01:15.017 **********
2026-03-29 03:44:07.274157 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:44:07.274166 | orchestrator |
2026-03-29 03:44:07.274179 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-29 03:44:07.274186 | orchestrator | Sunday 29 March 2026 03:44:07 +0000 (0:00:36.715) 0:01:51.733 **********
2026-03-29 03:44:07.274192 | orchestrator |
2026-03-29 03:44:07.274212 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-29 03:44:45.956056 | orchestrator | Sunday 29 March 2026 03:44:07 +0000 (0:00:00.077) 0:01:51.810 **********
2026-03-29 03:44:45.956218 | orchestrator |
2026-03-29 03:44:45.956235 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-29 03:44:45.956245 | orchestrator | Sunday 29 March 2026 03:44:07 +0000 (0:00:00.080) 0:01:51.890 **********
2026-03-29 03:44:45.956254 | orchestrator |
2026-03-29 03:44:45.956263 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-03-29 03:44:45.956272 | orchestrator | Sunday 29 March 2026 03:44:07 +0000 (0:00:00.073) 0:01:51.964 **********
2026-03-29 03:44:45.956281 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:44:45.956318 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:44:45.956327 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:44:45.956335 | orchestrator |
2026-03-29 03:44:45.956344 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-03-29 03:44:45.956353 | orchestrator | Sunday 29 March 2026 03:44:17 +0000 (0:00:10.189) 0:02:02.154 **********
2026-03-29 03:44:45.956362 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:44:45.956371 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:44:45.956379 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:44:45.956388 | orchestrator |
2026-03-29 03:44:45.956398 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-03-29 03:44:45.956407 | orchestrator | Sunday 29 March 2026 03:44:28 +0000 (0:00:10.776) 0:02:12.930 **********
2026-03-29 03:44:45.956415 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:44:45.956424 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:44:45.956433 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:44:45.956438 | orchestrator |
2026-03-29 03:44:45.956443 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-03-29 03:44:45.956449 | orchestrator | Sunday 29 March 2026 03:44:33 +0000 (0:00:05.180) 0:02:18.111 **********
2026-03-29 03:44:45.956454 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:44:45.956459 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:44:45.956464 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:44:45.956469 | orchestrator |
2026-03-29 03:44:45.956474 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:44:45.956481 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 03:44:45.956487 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 03:44:45.956492 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 03:44:45.956497 | orchestrator |
2026-03-29 03:44:45.956503 | orchestrator |
2026-03-29 03:44:45.956508 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:44:45.956526 | orchestrator | Sunday 29 March 2026 03:44:45 +0000 (0:00:12.122) 0:02:30.234 **********
2026-03-29 03:44:45.956531 | orchestrator | ===============================================================================
2026-03-29 03:44:45.956539 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 36.72s
2026-03-29 03:44:45.956550 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.44s
2026-03-29 03:44:45.956563 | orchestrator | manila : Restart manila-share container -------------------------------- 12.12s
2026-03-29 03:44:45.956571 | orchestrator | manila : Restart manila-data container --------------------------------- 10.78s
2026-03-29 03:44:45.956579 | orchestrator | manila : Restart manila-api container ---------------------------------- 10.19s
2026-03-29 03:44:45.956600 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.77s
2026-03-29 03:44:45.956616 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.50s
2026-03-29 03:44:45.956625 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 5.18s
2026-03-29 03:44:45.956633 | orchestrator | manila : Copying over config.json files for services -------------------- 4.62s
2026-03-29 03:44:45.956642 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.07s
2026-03-29 03:44:45.956650 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.06s
2026-03-29 03:44:45.956659 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.90s
2026-03-29 03:44:45.956669 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.71s
2026-03-29 03:44:45.956678 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.47s
2026-03-29 03:44:45.956699 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.40s
2026-03-29 03:44:45.956706 | orchestrator | manila : Check manila containers ---------------------------------------- 3.27s
2026-03-29 03:44:45.956712 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.39s
2026-03-29 03:44:45.956717 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.36s
2026-03-29 03:44:45.956723 | orchestrator | manila : Creating Manila database --------------------------------------- 2.20s
2026-03-29 03:44:45.956729 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.74s
2026-03-29 03:44:46.279433 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-03-29 03:44:58.419965 | orchestrator | 2026-03-29 03:44:58 | INFO  | Task f6d9b5be-42e5-4e0b-9cb4-50da90f6d610 (netdata) was prepared for execution.
2026-03-29 03:44:58.420059 | orchestrator | 2026-03-29 03:44:58 | INFO  | It takes a moment until task f6d9b5be-42e5-4e0b-9cb4-50da90f6d610 (netdata) has been started and output is visible here.
2026-03-29 03:46:36.301955 | orchestrator |
2026-03-29 03:46:36.302260 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:46:36.302296 | orchestrator |
2026-03-29 03:46:36.302317 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:46:36.302334 | orchestrator | Sunday 29 March 2026 03:45:02 +0000 (0:00:00.230) 0:00:00.230 **********
2026-03-29 03:46:36.302346 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-29 03:46:36.302357 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-29 03:46:36.302368 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-29 03:46:36.302379 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-29 03:46:36.302390 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-29 03:46:36.302401 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-29 03:46:36.302411 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-29 03:46:36.302422 | orchestrator |
2026-03-29 03:46:36.302433 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-29 03:46:36.302444 | orchestrator |
2026-03-29 03:46:36.302455 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-29 03:46:36.302465 | orchestrator | Sunday 29 March 2026 03:45:03 +0000 (0:00:00.883) 0:00:01.113 **********
2026-03-29 03:46:36.302479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:46:36.302504 | orchestrator |
2026-03-29 03:46:36.302524 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-29 03:46:36.302543 | orchestrator | Sunday 29 March 2026 03:45:05 +0000 (0:00:01.846) 0:00:02.396 **********
2026-03-29 03:46:36.302563 | orchestrator | ok: [testbed-manager]
2026-03-29 03:46:36.302584 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:46:36.302602 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:46:36.302615 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:46:36.302627 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:46:36.302640 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:46:36.302652 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:46:36.302665 | orchestrator |
2026-03-29 03:46:36.302678 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-29 03:46:36.302691 | orchestrator | Sunday 29 March 2026 03:45:06 +0000 (0:00:02.326) 0:00:04.242 **********
2026-03-29 03:46:36.302703 | orchestrator | ok: [testbed-manager]
2026-03-29 03:46:36.302716 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:46:36.302729 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:46:36.302741 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:46:36.302753 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:46:36.302795 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:46:36.302808 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:46:36.302820 | orchestrator |
2026-03-29 03:46:36.302849 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-29 03:46:36.302862 | orchestrator | Sunday 29 March 2026 03:45:09 +0000 (0:00:02.326) 0:00:06.569 **********
2026-03-29 03:46:36.302875 | orchestrator | changed: [testbed-manager]
2026-03-29 03:46:36.302888 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:46:36.302898 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:46:36.302909 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:46:36.302919 | orchestrator | changed: [testbed-node-3]
2026-03-29 03:46:36.302930 | orchestrator | changed: [testbed-node-4]
2026-03-29 03:46:36.302940 | orchestrator | changed: [testbed-node-5]
2026-03-29 03:46:36.302951 | orchestrator |
2026-03-29 03:46:36.302961 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-29 03:46:36.302972 | orchestrator | Sunday 29 March 2026 03:45:10 +0000 (0:00:01.550) 0:00:08.120 **********
2026-03-29 03:46:36.302983 | orchestrator | changed: [testbed-manager]
2026-03-29 03:46:36.302993 | orchestrator | changed: [testbed-node-4]
2026-03-29 03:46:36.303004 | orchestrator | changed: [testbed-node-3]
2026-03-29 03:46:36.303014 | orchestrator | changed: [testbed-node-5]
2026-03-29 03:46:36.303134 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:46:36.303149 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:46:36.303160 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:46:36.303171 | orchestrator |
2026-03-29 03:46:36.303182 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-29 03:46:36.303193 | orchestrator | Sunday 29 March 2026 03:45:29 +0000 (0:00:18.852) 0:00:26.972 **********
2026-03-29 03:46:36.303203 | orchestrator | changed: [testbed-manager]
2026-03-29 03:46:36.303221 | orchestrator | changed: [testbed-node-4]
2026-03-29 03:46:36.303239 | orchestrator | changed: [testbed-node-5]
2026-03-29 03:46:36.303258 | orchestrator | changed: [testbed-node-3]
2026-03-29 03:46:36.303275 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:46:36.303294 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:46:36.303311 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:46:36.303328 | orchestrator |
2026-03-29 03:46:36.303345 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-29 03:46:36.303364 | orchestrator | Sunday 29 March 2026 03:46:10 +0000 (0:00:40.570) 0:01:07.543 **********
2026-03-29 03:46:36.303383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:46:36.303406 | orchestrator |
2026-03-29 03:46:36.303424 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-29 03:46:36.303445 | orchestrator | Sunday 29 March 2026 03:46:11 +0000 (0:00:01.441) 0:01:08.984 **********
2026-03-29 03:46:36.303457 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-29 03:46:36.303468 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-29 03:46:36.303479 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-29 03:46:36.303490 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-29 03:46:36.303525 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-29 03:46:36.303537 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-29 03:46:36.303547 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-29 03:46:36.303558 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-29 03:46:36.303569 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-29 03:46:36.303579 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-29 03:46:36.303590 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-29 03:46:36.303600 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-29 03:46:36.303626 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-29 03:46:36.303636 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-29 03:46:36.303647 | orchestrator |
2026-03-29 03:46:36.303659 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-29 03:46:36.303670 | orchestrator | Sunday 29 March 2026 03:46:15 +0000 (0:00:03.442) 0:01:12.427 **********
2026-03-29 03:46:36.303681 | orchestrator | ok: [testbed-manager]
2026-03-29 03:46:36.303692 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:46:36.303703 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:46:36.303714 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:46:36.303724 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:46:36.303735 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:46:36.303746 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:46:36.303756 | orchestrator |
2026-03-29 03:46:36.303767 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-29 03:46:36.303778 | orchestrator | Sunday 29 March 2026 03:46:16 +0000 (0:00:01.336) 0:01:13.763 **********
2026-03-29 03:46:36.303789 | orchestrator | changed: [testbed-manager]
2026-03-29 03:46:36.303800 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:46:36.303811 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:46:36.303821 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:46:36.303832 | orchestrator | changed: [testbed-node-3]
2026-03-29 03:46:36.303843 | orchestrator | changed: [testbed-node-4]
2026-03-29 03:46:36.303853 | orchestrator | changed: [testbed-node-5]
2026-03-29 03:46:36.303864 | orchestrator |
2026-03-29 03:46:36.303875 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-29 03:46:36.303885 | orchestrator | Sunday 29 March 2026 03:46:17 +0000 (0:00:01.394) 0:01:15.157 **********
2026-03-29 03:46:36.303896 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:46:36.303907 | orchestrator | ok: [testbed-manager]
2026-03-29 03:46:36.303917 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:46:36.303928 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:46:36.303939 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:46:36.303950 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:46:36.303960 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:46:36.303971 | orchestrator |
2026-03-29 03:46:36.303982 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-29 03:46:36.303993 | orchestrator | Sunday 29 March 2026 03:46:19 +0000 (0:00:01.836) 0:01:16.994 **********
2026-03-29 03:46:36.304003 | orchestrator | ok: [testbed-manager]
2026-03-29 03:46:36.304014 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:46:36.304053 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:46:36.304074 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:46:36.304085 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:46:36.304095 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:46:36.304106 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:46:36.304116 | orchestrator |
2026-03-29 03:46:36.304127 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-29 03:46:36.304138 | orchestrator | Sunday 29 March 2026 03:46:21 +0000 (0:00:01.825) 0:01:18.820 **********
2026-03-29 03:46:36.304149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-29 03:46:36.304162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:46:36.304173 | orchestrator |
2026-03-29 03:46:36.304184 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-29 03:46:36.304195 | orchestrator | Sunday 29 March 2026 03:46:22 +0000 (0:00:01.370) 0:01:20.190 **********
2026-03-29 03:46:36.304205 | orchestrator | changed: [testbed-manager]
2026-03-29 03:46:36.304216 | orchestrator |
2026-03-29 03:46:36.304227 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-29 03:46:36.304242 | orchestrator | Sunday 29 March 2026 03:46:25 +0000 (0:00:02.171) 0:01:22.361 **********
2026-03-29 03:46:36.304271 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:46:36.304290 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:46:36.304310 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:46:36.304330 | orchestrator | changed: [testbed-node-3]
2026-03-29 03:46:36.304348 | orchestrator | changed: [testbed-node-4]
2026-03-29 03:46:36.304366 | orchestrator | changed: [testbed-node-5]
2026-03-29 03:46:36.304379 | orchestrator | changed: [testbed-manager]
2026-03-29 03:46:36.304389 | orchestrator |
2026-03-29 03:46:36.304400 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:46:36.304411 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:46:36.304423 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:46:36.304434 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:46:36.304445 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:46:36.304466 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:46:36.738400 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:46:36.738477 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:46:36.738483 | orchestrator |
2026-03-29 03:46:36.738488 | orchestrator |
2026-03-29 03:46:36.738493 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:46:36.738498 | orchestrator | Sunday 29 March 2026 03:46:36 +0000 (0:00:11.246) 0:01:33.607 **********
2026-03-29 03:46:36.738511 | orchestrator | ===============================================================================
2026-03-29 03:46:36.738515 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.57s
2026-03-29 03:46:36.738519 | orchestrator | osism.services.netdata : Add repository -------------------------------- 18.85s
2026-03-29 03:46:36.738522 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.25s
2026-03-29 03:46:36.738526 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.44s
2026-03-29 03:46:36.738530 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.33s
2026-03-29 03:46:36.738534 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.17s
2026-03-29 03:46:36.738537 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.85s
2026-03-29 03:46:36.738541 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.84s
2026-03-29 03:46:36.738545 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.83s
2026-03-29 03:46:36.738549 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.55s
2026-03-29 03:46:36.738552 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.44s
2026-03-29 03:46:36.738556 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.39s
2026-03-29 03:46:36.738560 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.37s
2026-03-29 03:46:36.738564 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.34s
2026-03-29 03:46:36.738569 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.28s
2026-03-29 03:46:36.738572 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2026-03-29 03:46:40.395282 | orchestrator | 2026-03-29 03:46:40 | INFO  | Task d710db5d-1d89-4328-9561-2cb439a2927a (prometheus) was prepared for execution.
2026-03-29 03:46:40.395417 | orchestrator | 2026-03-29 03:46:40 | INFO  | It takes a moment until task d710db5d-1d89-4328-9561-2cb439a2927a (prometheus) has been started and output is visible here.
2026-03-29 03:46:48.846794 | orchestrator |
2026-03-29 03:46:48.846903 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:46:48.846917 | orchestrator |
2026-03-29 03:46:48.846924 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:46:48.846932 | orchestrator | Sunday 29 March 2026 03:46:44 +0000 (0:00:00.259) 0:00:00.259 **********
2026-03-29 03:46:48.846939 | orchestrator | ok: [testbed-manager]
2026-03-29 03:46:48.846948 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:46:48.846955 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:46:48.846962 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:46:48.846968 | orchestrator | ok: [testbed-node-3]
2026-03-29 03:46:48.846974 | orchestrator | ok: [testbed-node-4]
2026-03-29 03:46:48.846981 | orchestrator | ok: [testbed-node-5]
2026-03-29 03:46:48.846988 | orchestrator |
2026-03-29 03:46:48.846994 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:46:48.847000 | orchestrator | Sunday 29 March 2026 03:46:44 +0000 (0:00:00.746) 0:00:01.005 **********
2026-03-29 03:46:48.847009 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-29 03:46:48.847106 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-29 03:46:48.847118 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-29 03:46:48.847124 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-29 03:46:48.847131 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-29 03:46:48.847137 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-29 03:46:48.847143 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-29 03:46:48.847150 | orchestrator |
2026-03-29 03:46:48.847157 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-29 03:46:48.847164 | orchestrator |
2026-03-29 03:46:48.847170 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-29 03:46:48.847176 | orchestrator | Sunday 29 March 2026 03:46:45 +0000 (0:00:00.782) 0:00:01.788 **********
2026-03-29 03:46:48.847185 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:46:48.847192 | orchestrator |
2026-03-29 03:46:48.847200 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-29 03:46:48.847206 | orchestrator | Sunday 29 March 2026 03:46:47 +0000 (0:00:01.229) 0:00:03.017 **********
2026-03-29 03:46:48.847217 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 03:46:48.847228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:48.847262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:48.847284 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:48.847313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:48.847321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:48.847328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:48.847335 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:48.847343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:48.847357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:48.847364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:48.847383 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-29 03:46:50.048118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:50.048205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:50.048216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:50.048225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:50.048255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:50.048262 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:50.048280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 03:46:50.048303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:50.048310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:50.048317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:50.048324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:50.048340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:46:50.048347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:46:50.048358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:46:50.048365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-29 03:46:50.048379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:55.055254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:55.055373 | orchestrator |
2026-03-29 03:46:55.055384 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-29 03:46:55.055390 | orchestrator | Sunday 29 March 2026 03:46:50 +0000 (0:00:03.024) 0:00:06.042 **********
2026-03-29 03:46:55.055395 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 03:46:55.055401 | orchestrator |
2026-03-29 03:46:55.055406 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-03-29 03:46:55.055427 | orchestrator | Sunday 29 March 2026 03:46:51 +0000 (0:00:01.648) 0:00:07.691 **********
2026-03-29 03:46:55.055433 | orchestrator | changed: [testbed-manager] => (item={'key':
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 03:46:55.055439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:46:55.055444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:46:55.055459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:46:55.055464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:46:55.055481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:46:55.055486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:46:55.055496 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:46:55.055501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:46:55.055506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:46:55.055511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:46:55.055519 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:46:55.055524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:46:55.055531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:46:57.565404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:46:57.565481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:46:57.565489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:46:57.565507 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 03:46:57.565514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:46:57.565520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 
03:46:57.565535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:46:57.565557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:46:57.565561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:46:57.565567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:46:57.565571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:46:57.565578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:46:57.565582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:57.565587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:57.565601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:58.833319 | orchestrator |
2026-03-29 03:46:58.833406 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-03-29 03:46:58.833415 | orchestrator | Sunday 29 March 2026 03:46:57 +0000 (0:00:05.860) 0:00:13.551 **********
2026-03-29 03:46:58.833426 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy':
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 03:46:58.833436 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 03:46:58.833444 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 03:46:58.833499 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 03:46:58.833511 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 03:46:58.833542 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:46:58.833562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 03:46:58.833567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:58.833571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:58.833575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:58.833579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:58.833583 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:46:58.833590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:58.833594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:58.833603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:58.833614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.065396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.065491 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:46:59.065502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:59.065511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.065533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.065542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.065568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.065574 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:46:59.065581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:59.065604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.065611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.065618 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:46:59.065625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:59.065631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.065642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.065654 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:46:59.065661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:59.065667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.065673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.065684 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:46:59.872048 | orchestrator |
2026-03-29 03:46:59.872135 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-03-29 03:46:59.872147 | orchestrator | Sunday 29 March 2026 03:46:59 +0000 (0:00:01.511) 0:00:15.062 **********
2026-03-29 03:46:59.872157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:59.872168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.872179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.872213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.872256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.872269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:59.872281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.872314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.872327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.872337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:46:59.872358 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 03:46:59.872380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:46:59.872393 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:46:59.872414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-29 03:47:01.170288 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:01.170368 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:47:01.170375 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:47:01.170380 | orchestrator | skipping: [testbed-manager]
2026-03-29 03:47:01.170384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:01.170407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:01.170422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:01.170427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:47:01.170432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:01.170436 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:47:01.170440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:01.170455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:47:01.170459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 03:47:01.170463 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:47:01.170467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:01.170478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:47:01.170482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 03:47:01.170486 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:47:01.170490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:01.170494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:47:01.170501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 03:47:04.708836 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:47:04.708942 | orchestrator |
2026-03-29 03:47:04.708955 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-03-29 03:47:04.708970 | orchestrator | Sunday 29 March 2026 03:47:01 +0000 (0:00:02.088) 0:00:17.150 **********
2026-03-29 03:47:04.708990 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 03:47:04.709105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:04.709138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:04.709153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:04.709167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:04.709180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:04.709213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:04.709222 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 03:47:04.709239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:04.709247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:04.709259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:04.709267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:47:04.709276 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:47:04.709283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:47:04.709298 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 03:47:08.258790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:08.258906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 03:47:08.258940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:47:08.258955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:47:08.258969 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:47:08.258980 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:47:08.258994 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 03:47:08.259127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:47:08.259144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:47:08.259162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:47:08.259174 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:47:08.259186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:47:08.259197 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:47:08.259209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:47:08.259230 | orchestrator | 2026-03-29 03:47:08.259244 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-29 03:47:08.259257 | orchestrator | Sunday 29 March 2026 03:47:07 +0000 (0:00:06.160) 0:00:23.311 ********** 2026-03-29 03:47:08.259268 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 03:47:08.259280 | orchestrator | 2026-03-29 03:47:08.259292 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-29 03:47:08.259312 | orchestrator | Sunday 29 March 2026 03:47:08 +0000 (0:00:00.948) 0:00:24.260 ********** 2026-03-29 03:47:11.287188 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 
1110776, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8458412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287306 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1110776, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8458412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287366 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1110776, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8458412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287382 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1110807, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8492577, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287394 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1110776, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8458412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:11.287406 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1110776, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8458412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287461 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1110807, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8492577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287475 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1110776, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8458412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287486 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1110807, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8492577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287503 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1110776, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8458412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287514 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1110807, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8492577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287526 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1110770, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8448992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287545 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1110807, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8492577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287559 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1110770, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8448992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:11.287590 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1110770, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8448992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948170 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1110796, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8481903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948266 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1110807, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774748990.8492577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948275 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1110770, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8448992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948281 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1110770, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8448992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948307 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1110807, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8492577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:12.948313 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1110796, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8481903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948318 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1110796, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8481903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948334 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1110762, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8444319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948343 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1110796, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8481903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948349 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1110770, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8448992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948354 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1110796, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8481903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948365 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1110762, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8444319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948370 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1110762, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8444319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948375 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1110762, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8444319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:12.948385 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1110779, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774748990.8461156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:14.221262 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)  2026-03-29 03:47:14.221373 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)  2026-03-29 03:47:14.221392 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)  2026-03-29 03:47:14.221445 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)  2026-03-29 03:47:14.221470 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)  2026-03-29 03:47:14.221489 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)  2026-03-29 03:47:14.221507 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)  2026-03-29 03:47:14.221562 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)  2026-03-29 03:47:14.221583 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)  2026-03-29 03:47:14.221608 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)  2026-03-29 03:47:14.221619 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)  2026-03-29 03:47:14.221631 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)  2026-03-29 03:47:14.221642 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)  2026-03-29 03:47:14.221653 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)  2026-03-29 03:47:14.221677 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)  2026-03-29 03:47:15.688365 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)  2026-03-29 03:47:15.688466 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)  2026-03-29 03:47:15.688475 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)  2026-03-29 03:47:15.688481 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)  2026-03-29 03:47:15.688488 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)  2026-03-29 03:47:15.688494 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)  2026-03-29 03:47:15.688516 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)  2026-03-29 03:47:15.688536 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)  2026-03-29 03:47:15.688549 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)  2026-03-29 03:47:15.688555 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)  2026-03-29 03:47:15.688561 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)  2026-03-29 03:47:15.688568 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)  2026-03-29 03:47:15.688574 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)  2026-03-29 03:47:15.688583 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)  2026-03-29 03:47:15.688594 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)  2026-03-29 03:47:17.021926 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)  2026-03-29 03:47:17.022076 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)  2026-03-29 03:47:17.022089 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)  2026-03-29 03:47:17.022093 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)  2026-03-29 03:47:17.022098 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)  2026-03-29 03:47:17.022115 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)  2026-03-29 03:47:17.022137 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)  2026-03-29 03:47:17.022154 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)  2026-03-29 03:47:17.022158 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)  2026-03-29 03:47:17.022163 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)  2026-03-29 03:47:17.022167 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)  2026-03-29 03:47:17.022171 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)  2026-03-29 03:47:17.022179 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)  2026-03-29 03:47:17.022187 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)  2026-03-29 03:47:17.022194 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)  2026-03-29 03:47:18.404927 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)  2026-03-29 03:47:18.405031 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)  2026-03-29 03:47:18.405042 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)  2026-03-29 03:47:18.405048 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)  2026-03-29 03:47:18.405065 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)  2026-03-29 03:47:18.405086 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)  2026-03-29 03:47:18.405092 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)  2026-03-29 03:47:18.405108 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)  2026-03-29 03:47:18.405113 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)  2026-03-29 03:47:18.405118 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)  2026-03-29 03:47:18.405123 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)  2026-03-29 03:47:18.405132 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)  2026-03-29 03:47:18.405140 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)  2026-03-29 03:47:18.405145 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)  2026-03-29 03:47:18.405154 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)  2026-03-29 03:47:19.778644 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)  2026-03-29 03:47:19.778753 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)  2026-03-29 03:47:19.778766 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)  2026-03-29 03:47:19.778797 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)  2026-03-29 03:47:19.778822 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)  2026-03-29 03:47:19.778832 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)  2026-03-29 03:47:19.778841 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)  2026-03-29 03:47:19.778868 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1110783, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8464615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False,
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:19.778878 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1110823, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.850626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:19.778888 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:47:19.778899 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1110761, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8442628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:19.778915 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1110786, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8468082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-03-29 03:47:19.778929 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1110786, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8468082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:19.778938 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1110761, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8442628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:19.778947 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1110786, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8468082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:19.778962 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1110783, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8464615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647544 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1110823, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.850626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647660 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:47:26.647676 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1110786, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8468082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647684 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1110783, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8464615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647706 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1110783, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8464615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647715 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1110823, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.850626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647722 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:47:26.647726 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1110790, 
'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.847653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:26.647744 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1110783, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8464615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647749 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1110823, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.850626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647759 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:47:26.647763 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1110823, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774748990.850626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647767 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:47:26.647774 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1110823, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.850626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 03:47:26.647779 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:47:26.647783 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1110780, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8462915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:26.647788 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1110773, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774748990.8458412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:26.647792 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1110805, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8487582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:26.647801 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1110760, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8441098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:52.036191 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1110824, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.850626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:52.036324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1110804, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8484924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:52.036373 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1110766, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8448076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:52.036401 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1110761, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8442628, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:52.036419 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1110786, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8468082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:52.036437 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1110783, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8464615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:52.036453 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1110823, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.850626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 03:47:52.036497 | orchestrator | 2026-03-29 03:47:52.036545 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-29 03:47:52.036561 | orchestrator | Sunday 29 March 2026 
03:47:32 +0000 (0:00:24.414) 0:00:48.674 **********
2026-03-29 03:47:52.036572 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 03:47:52.036583 | orchestrator |
2026-03-29 03:47:52.036593 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-29 03:47:52.036602 | orchestrator | Sunday 29 March 2026 03:47:33 +0000 (0:00:00.720) 0:00:49.395 **********
2026-03-29 03:47:52.036612 | orchestrator | [WARNING]: Skipped
2026-03-29 03:47:52.036623 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036634 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-29 03:47:52.036644 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036653 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-29 03:47:52.036663 | orchestrator | [WARNING]: Skipped
2026-03-29 03:47:52.036673 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036683 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-29 03:47:52.036693 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036702 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-29 03:47:52.036714 | orchestrator | [WARNING]: Skipped
2026-03-29 03:47:52.036725 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036737 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-29 03:47:52.036749 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036760 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-29 03:47:52.036771 | orchestrator | [WARNING]: Skipped
2026-03-29 03:47:52.036783 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036794 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-29 03:47:52.036806 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036817 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-29 03:47:52.036829 | orchestrator | [WARNING]: Skipped
2026-03-29 03:47:52.036847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036859 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-29 03:47:52.036870 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036882 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-29 03:47:52.036893 | orchestrator | [WARNING]: Skipped
2026-03-29 03:47:52.036902 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036912 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-29 03:47:52.036921 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036931 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-29 03:47:52.036940 | orchestrator | [WARNING]: Skipped
2026-03-29 03:47:52.036950 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036960 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-29 03:47:52.036969 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 03:47:52.036979 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-29 03:47:52.037071 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 03:47:52.037082 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 03:47:52.037092 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-29 03:47:52.037102 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-29 03:47:52.037111 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-29 03:47:52.037123 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-29 03:47:52.037146 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-29 03:47:52.037166 | orchestrator |
2026-03-29 03:47:52.037181 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-29 03:47:52.037197 | orchestrator | Sunday 29 March 2026 03:47:35 +0000 (0:00:01.744) 0:00:51.139 **********
2026-03-29 03:47:52.037213 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 03:47:52.037232 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:47:52.037248 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 03:47:52.037265 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:47:52.037276 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 03:47:52.037286 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:47:52.037296 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 03:47:52.037305 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:47:52.037315 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 03:47:52.037324 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:47:52.037334 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 03:47:52.037343 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:47:52.037353 | orchestrator | changed: [testbed-manager] =>
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 03:47:52.037362 | orchestrator |
2026-03-29 03:47:52.037375 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-29 03:48:09.526941 | orchestrator | Sunday 29 March 2026 03:47:52 +0000 (0:00:16.886) 0:01:08.026 **********
2026-03-29 03:48:09.527091 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 03:48:09.527104 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:48:09.527113 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 03:48:09.527119 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:48:09.527125 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 03:48:09.527132 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:48:09.527138 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 03:48:09.527144 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:48:09.527151 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 03:48:09.527158 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:48:09.527165 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 03:48:09.527171 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:48:09.527177 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 03:48:09.527184 | orchestrator |
2026-03-29 03:48:09.527192 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-29 03:48:09.527198 | orchestrator | Sunday 29 March 2026 03:47:55 +0000 (0:00:03.030) 0:01:11.057 **********
2026-03-29 03:48:09.527204 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 03:48:09.527237 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:48:09.527243 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 03:48:09.527250 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:48:09.527270 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 03:48:09.527277 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:48:09.527283 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 03:48:09.527288 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:48:09.527294 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 03:48:09.527302 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 03:48:09.527308 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:48:09.527313 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 03:48:09.527319 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:48:09.527325 | orchestrator |
2026-03-29 03:48:09.527331 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-29 03:48:09.527338 | orchestrator | Sunday 29 March 2026 03:47:57 +0000 (0:00:01.960) 0:01:13.018 **********
2026-03-29 03:48:09.527344 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 03:48:09.527350 | orchestrator |
2026-03-29 03:48:09.527356 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-29 03:48:09.527363 | orchestrator | Sunday 29 March 2026 03:47:57 +0000 (0:00:00.794) 0:01:13.813 **********
2026-03-29 03:48:09.527369 | orchestrator | skipping: [testbed-manager]
2026-03-29 03:48:09.527376 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:48:09.527382 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:48:09.527387 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:48:09.527393 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:48:09.527398 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:48:09.527404 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:48:09.527410 | orchestrator |
2026-03-29 03:48:09.527416 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-29 03:48:09.527421 | orchestrator | Sunday 29 March 2026 03:47:58 +0000 (0:00:00.751) 0:01:14.564 **********
2026-03-29 03:48:09.527427 | orchestrator | skipping: [testbed-manager]
2026-03-29 03:48:09.527433 | orchestrator | skipping: [testbed-node-3]
2026-03-29 03:48:09.527439 | orchestrator | skipping: [testbed-node-4]
2026-03-29 03:48:09.527444 | orchestrator | skipping: [testbed-node-5]
2026-03-29 03:48:09.527451 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:48:09.527457 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:48:09.527463 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:48:09.527469 | orchestrator |
2026-03-29 03:48:09.527474 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-29 03:48:09.527480 | orchestrator | Sunday 29 March 2026 03:48:00 +0000 (0:00:02.191) 0:01:16.756 **********
2026-03-29 03:48:09.527486 | orchestrator | skipping: [testbed-manager] =>
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-29 03:48:09.527493 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-29 03:48:09.527499 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:48:09.527505 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-29 03:48:09.527529 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-29 03:48:09.527536 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:48:09.527551 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:48:09.527557 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:48:09.527563 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-29 03:48:09.527569 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:48:09.527575 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-29 03:48:09.527581 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:48:09.527587 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-29 03:48:09.527593 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:48:09.527599 | orchestrator | 2026-03-29 03:48:09.527605 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-29 03:48:09.527611 | orchestrator | Sunday 29 March 2026 03:48:02 +0000 (0:00:01.530) 0:01:18.287 ********** 2026-03-29 03:48:09.527619 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-29 03:48:09.527625 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:48:09.527631 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-29 03:48:09.527637 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:48:09.527643 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-29 03:48:09.527649 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:48:09.527657 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-29 03:48:09.527666 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:48:09.527674 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-29 03:48:09.527680 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:48:09.527686 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-29 03:48:09.527693 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:48:09.527706 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-29 03:48:09.527713 | orchestrator | 2026-03-29 03:48:09.527719 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-29 03:48:09.527725 | orchestrator | Sunday 29 March 2026 03:48:03 +0000 (0:00:01.595) 0:01:19.883 ********** 2026-03-29 03:48:09.527731 | orchestrator | [WARNING]: Skipped 2026-03-29 03:48:09.527738 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-29 03:48:09.527745 | orchestrator | due to this access issue: 2026-03-29 03:48:09.527751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-29 03:48:09.527757 | orchestrator | not a directory 2026-03-29 03:48:09.527764 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-03-29 03:48:09.527770 | orchestrator | 2026-03-29 03:48:09.527776 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-29 03:48:09.527782 | orchestrator | Sunday 29 March 2026 03:48:05 +0000 (0:00:01.162) 0:01:21.045 ********** 2026-03-29 03:48:09.527789 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:48:09.527795 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:48:09.527801 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:48:09.527807 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:48:09.527813 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:48:09.527819 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:48:09.527824 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:48:09.527831 | orchestrator | 2026-03-29 03:48:09.527837 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-29 03:48:09.527845 | orchestrator | Sunday 29 March 2026 03:48:05 +0000 (0:00:00.953) 0:01:21.998 ********** 2026-03-29 03:48:09.527860 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:48:09.527867 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:48:09.527873 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:48:09.527879 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:48:09.527885 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:48:09.527891 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:48:09.527898 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:48:09.527904 | orchestrator | 2026-03-29 03:48:09.527909 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-29 03:48:09.527915 | orchestrator | Sunday 29 March 2026 03:48:06 +0000 (0:00:00.912) 0:01:22.911 ********** 2026-03-29 03:48:09.527925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:48:09.528023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:48:11.211232 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 03:48:11.211330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:48:11.211357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:48:11.211366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:48:11.211395 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:48:11.211404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:11.211411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 03:48:11.211436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:11.211442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:11.211447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:48:11.211457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:48:11.211465 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:48:11.211471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:11.211475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:48:11.211484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:13.296461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:13.296541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:48:13.296561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:48:13.296567 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 03:48:13.296588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 03:48:13.296593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:48:13.296607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:48:13.296611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 03:48:13.296615 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:13.296622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:13.296630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:13.296634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 03:48:13.296638 | orchestrator | 2026-03-29 03:48:13.296644 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-29 03:48:13.296649 | orchestrator | Sunday 29 March 2026 03:48:11 +0000 (0:00:04.301) 0:01:27.212 ********** 2026-03-29 03:48:13.296653 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-29 03:48:13.296657 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:48:13.296661 | orchestrator | 2026-03-29 03:48:13.296665 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-29 03:48:13.296669 | orchestrator | Sunday 29 March 2026 03:48:12 +0000 (0:00:01.513) 0:01:28.726 ********** 2026-03-29 03:48:13.296673 | orchestrator | 2026-03-29 03:48:13.296677 | 
orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-29 03:48:13.296680 | orchestrator | Sunday 29 March 2026 03:48:12 +0000 (0:00:00.080) 0:01:28.806 ********** 2026-03-29 03:48:13.296684 | orchestrator | 2026-03-29 03:48:13.296688 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-29 03:48:13.296692 | orchestrator | Sunday 29 March 2026 03:48:12 +0000 (0:00:00.072) 0:01:28.879 ********** 2026-03-29 03:48:13.296695 | orchestrator | 2026-03-29 03:48:13.296699 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-29 03:48:13.296703 | orchestrator | Sunday 29 March 2026 03:48:12 +0000 (0:00:00.071) 0:01:28.951 ********** 2026-03-29 03:48:13.296706 | orchestrator | 2026-03-29 03:48:13.296710 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-29 03:48:13.296714 | orchestrator | Sunday 29 March 2026 03:48:13 +0000 (0:00:00.071) 0:01:29.022 ********** 2026-03-29 03:48:13.296718 | orchestrator | 2026-03-29 03:48:13.296721 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-29 03:48:13.296725 | orchestrator | Sunday 29 March 2026 03:48:13 +0000 (0:00:00.068) 0:01:29.091 ********** 2026-03-29 03:48:13.296729 | orchestrator | 2026-03-29 03:48:13.296733 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-29 03:48:13.296739 | orchestrator | Sunday 29 March 2026 03:48:13 +0000 (0:00:00.082) 0:01:29.174 ********** 2026-03-29 03:49:54.183593 | orchestrator | 2026-03-29 03:49:54.183679 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-29 03:49:54.183688 | orchestrator | Sunday 29 March 2026 03:48:13 +0000 (0:00:00.097) 0:01:29.272 ********** 2026-03-29 03:49:54.183694 | orchestrator | changed: [testbed-manager] 
2026-03-29 03:49:54.183700 | orchestrator | 2026-03-29 03:49:54.183706 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-29 03:49:54.183711 | orchestrator | Sunday 29 March 2026 03:48:34 +0000 (0:00:21.188) 0:01:50.460 ********** 2026-03-29 03:49:54.183733 | orchestrator | changed: [testbed-manager] 2026-03-29 03:49:54.183740 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:49:54.183745 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:49:54.183750 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:49:54.183755 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:49:54.183760 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:49:54.183765 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:49:54.183770 | orchestrator | 2026-03-29 03:49:54.183775 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-29 03:49:54.183780 | orchestrator | Sunday 29 March 2026 03:48:43 +0000 (0:00:08.643) 0:01:59.103 ********** 2026-03-29 03:49:54.183785 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:49:54.183790 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:49:54.183795 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:49:54.183800 | orchestrator | 2026-03-29 03:49:54.183805 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-29 03:49:54.183811 | orchestrator | Sunday 29 March 2026 03:48:53 +0000 (0:00:10.670) 0:02:09.774 ********** 2026-03-29 03:49:54.183828 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:49:54.183840 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:49:54.183846 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:49:54.183851 | orchestrator | 2026-03-29 03:49:54.183856 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-29 03:49:54.183861 | orchestrator | Sunday 29 March 
2026 03:49:04 +0000 (0:00:10.703) 0:02:20.478 ********** 2026-03-29 03:49:54.183876 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:49:54.183881 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:49:54.183886 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:49:54.183891 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:49:54.183896 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:49:54.183901 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:49:54.183906 | orchestrator | changed: [testbed-manager] 2026-03-29 03:49:54.183911 | orchestrator | 2026-03-29 03:49:54.183916 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-29 03:49:54.183921 | orchestrator | Sunday 29 March 2026 03:49:18 +0000 (0:00:14.259) 0:02:34.738 ********** 2026-03-29 03:49:54.183971 | orchestrator | changed: [testbed-manager] 2026-03-29 03:49:54.183976 | orchestrator | 2026-03-29 03:49:54.183981 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-29 03:49:54.183986 | orchestrator | Sunday 29 March 2026 03:49:32 +0000 (0:00:13.622) 0:02:48.360 ********** 2026-03-29 03:49:54.183992 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:49:54.183997 | orchestrator | changed: [testbed-node-1] 2026-03-29 03:49:54.184002 | orchestrator | changed: [testbed-node-2] 2026-03-29 03:49:54.184007 | orchestrator | 2026-03-29 03:49:54.184011 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-29 03:49:54.184016 | orchestrator | Sunday 29 March 2026 03:49:37 +0000 (0:00:05.362) 0:02:53.723 ********** 2026-03-29 03:49:54.184021 | orchestrator | changed: [testbed-manager] 2026-03-29 03:49:54.184026 | orchestrator | 2026-03-29 03:49:54.184031 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-29 03:49:54.184036 | orchestrator | Sunday 29 March 2026 
03:49:43 +0000 (0:00:05.811) 0:02:59.534 ********** 2026-03-29 03:49:54.184041 | orchestrator | changed: [testbed-node-5] 2026-03-29 03:49:54.184046 | orchestrator | changed: [testbed-node-4] 2026-03-29 03:49:54.184051 | orchestrator | changed: [testbed-node-3] 2026-03-29 03:49:54.184056 | orchestrator | 2026-03-29 03:49:54.184061 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:49:54.184068 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-29 03:49:54.184075 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 03:49:54.184086 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 03:49:54.184091 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 03:49:54.184096 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 03:49:54.184109 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 03:49:54.184114 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 03:49:54.184126 | orchestrator | 2026-03-29 03:49:54.184131 | orchestrator | 2026-03-29 03:49:54.184136 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:49:54.184141 | orchestrator | Sunday 29 March 2026 03:49:53 +0000 (0:00:10.158) 0:03:09.693 ********** 2026-03-29 03:49:54.184146 | orchestrator | =============================================================================== 2026-03-29 03:49:54.184152 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.41s 2026-03-29 03:49:54.184172 | orchestrator | 
prometheus : Restart prometheus-server container ----------------------- 21.19s 2026-03-29 03:49:54.184181 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.89s 2026-03-29 03:49:54.184189 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.26s 2026-03-29 03:49:54.184198 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.62s 2026-03-29 03:49:54.184206 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.70s 2026-03-29 03:49:54.184214 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.67s 2026-03-29 03:49:54.184221 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.16s 2026-03-29 03:49:54.184228 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 8.64s 2026-03-29 03:49:54.184236 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.16s 2026-03-29 03:49:54.184245 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.86s 2026-03-29 03:49:54.184253 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.81s 2026-03-29 03:49:54.184261 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.36s 2026-03-29 03:49:54.184270 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.30s 2026-03-29 03:49:54.184278 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.03s 2026-03-29 03:49:54.184286 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.03s 2026-03-29 03:49:54.184294 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.19s 2026-03-29 03:49:54.184301 | orchestrator | 
service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.09s 2026-03-29 03:49:54.184314 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.96s 2026-03-29 03:49:54.184323 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.74s 2026-03-29 03:49:57.855133 | orchestrator | 2026-03-29 03:49:57 | INFO  | Task 60888ee2-a5b6-4c43-98b7-23cf1070a781 (grafana) was prepared for execution. 2026-03-29 03:49:57.855230 | orchestrator | 2026-03-29 03:49:57 | INFO  | It takes a moment until task 60888ee2-a5b6-4c43-98b7-23cf1070a781 (grafana) has been started and output is visible here. 2026-03-29 03:50:07.480147 | orchestrator | 2026-03-29 03:50:07.480248 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 03:50:07.480270 | orchestrator | 2026-03-29 03:50:07.480311 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 03:50:07.480331 | orchestrator | Sunday 29 March 2026 03:50:02 +0000 (0:00:00.258) 0:00:00.258 ********** 2026-03-29 03:50:07.480349 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:50:07.480366 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:50:07.480384 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:50:07.480401 | orchestrator | 2026-03-29 03:50:07.480418 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 03:50:07.480435 | orchestrator | Sunday 29 March 2026 03:50:02 +0000 (0:00:00.319) 0:00:00.578 ********** 2026-03-29 03:50:07.480453 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-29 03:50:07.480472 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-29 03:50:07.480488 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-29 03:50:07.480504 | orchestrator | 2026-03-29 03:50:07.480519 | orchestrator | PLAY [Apply 
role grafana] ****************************************************** 2026-03-29 03:50:07.480536 | orchestrator | 2026-03-29 03:50:07.480549 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-29 03:50:07.480562 | orchestrator | Sunday 29 March 2026 03:50:02 +0000 (0:00:00.429) 0:00:01.007 ********** 2026-03-29 03:50:07.480577 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:50:07.480590 | orchestrator | 2026-03-29 03:50:07.480603 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-29 03:50:07.480617 | orchestrator | Sunday 29 March 2026 03:50:03 +0000 (0:00:00.564) 0:00:01.572 ********** 2026-03-29 03:50:07.480633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:07.480650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:07.480663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:07.480677 | orchestrator | 2026-03-29 03:50:07.480691 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-29 03:50:07.480705 | orchestrator | Sunday 29 March 2026 03:50:04 +0000 (0:00:00.840) 0:00:02.412 ********** 2026-03-29 03:50:07.480727 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-29 03:50:07.480741 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-29 03:50:07.480768 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:50:07.480782 | orchestrator | 2026-03-29 03:50:07.480795 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-29 03:50:07.480808 | orchestrator | Sunday 29 March 2026 03:50:04 +0000 (0:00:00.822) 0:00:03.235 ********** 2026-03-29 03:50:07.480822 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:50:07.480834 | 
orchestrator | 2026-03-29 03:50:07.480848 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-29 03:50:07.480861 | orchestrator | Sunday 29 March 2026 03:50:05 +0000 (0:00:00.603) 0:00:03.838 ********** 2026-03-29 03:50:07.480891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:07.480906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:07.480949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:07.480965 | orchestrator | 2026-03-29 03:50:07.480977 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-29 03:50:07.480991 | orchestrator | Sunday 29 March 2026 03:50:06 +0000 (0:00:01.291) 0:00:05.130 ********** 2026-03-29 03:50:07.481004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 03:50:07.481018 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:50:07.481039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 03:50:07.481058 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:50:07.481080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 03:50:14.409975 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:50:14.410161 | orchestrator | 2026-03-29 03:50:14.410192 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-29 03:50:14.410213 | orchestrator | Sunday 29 March 2026 03:50:07 +0000 (0:00:00.577) 0:00:05.707 ********** 2026-03-29 03:50:14.410235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 03:50:14.410261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 03:50:14.410282 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:50:14.410302 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:50:14.410315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 03:50:14.410353 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:50:14.410365 | orchestrator | 2026-03-29 03:50:14.410376 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-29 
03:50:14.410387 | orchestrator | Sunday 29 March 2026 03:50:08 +0000 (0:00:00.726) 0:00:06.434 ********** 2026-03-29 03:50:14.410398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:14.410424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:14.410457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:14.410472 | orchestrator | 2026-03-29 03:50:14.410485 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-29 03:50:14.410497 | orchestrator | Sunday 29 March 2026 03:50:09 +0000 (0:00:01.317) 0:00:07.751 ********** 2026-03-29 03:50:14.410510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:14.410524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2026-03-29 03:50:14.410548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 03:50:14.410561 | orchestrator | 2026-03-29 03:50:14.410573 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-29 03:50:14.410585 | orchestrator | Sunday 29 March 2026 03:50:11 +0000 (0:00:01.574) 0:00:09.326 ********** 2026-03-29 03:50:14.410598 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:50:14.410610 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:50:14.410620 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:50:14.410631 | orchestrator | 2026-03-29 03:50:14.410642 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-29 03:50:14.410652 | orchestrator | Sunday 29 March 2026 03:50:11 +0000 (0:00:00.311) 0:00:09.637 ********** 2026-03-29 03:50:14.410663 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-29 03:50:14.410674 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-29 03:50:14.410690 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-29 03:50:14.410702 | orchestrator | 2026-03-29 03:50:14.410713 | orchestrator | TASK 
[grafana : Configuring dashboards provisioning] *************************** 2026-03-29 03:50:14.410723 | orchestrator | Sunday 29 March 2026 03:50:12 +0000 (0:00:01.246) 0:00:10.883 ********** 2026-03-29 03:50:14.410735 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-29 03:50:14.410746 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-29 03:50:14.410757 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-29 03:50:14.410768 | orchestrator | 2026-03-29 03:50:14.410779 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-29 03:50:14.410797 | orchestrator | Sunday 29 March 2026 03:50:14 +0000 (0:00:01.751) 0:00:12.635 ********** 2026-03-29 03:50:20.886714 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:50:20.886861 | orchestrator | 2026-03-29 03:50:20.886901 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-29 03:50:20.886962 | orchestrator | Sunday 29 March 2026 03:50:15 +0000 (0:00:00.735) 0:00:13.370 ********** 2026-03-29 03:50:20.886971 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-29 03:50:20.886980 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-29 03:50:20.886988 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:50:20.886996 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:50:20.887004 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:50:20.887011 | orchestrator | 2026-03-29 03:50:20.887019 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-29 03:50:20.887027 | orchestrator | Sunday 29 March 2026 03:50:15 +0000 (0:00:00.722) 
0:00:14.093 ********** 2026-03-29 03:50:20.887034 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:50:20.887042 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:50:20.887049 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:50:20.887056 | orchestrator | 2026-03-29 03:50:20.887064 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-29 03:50:20.887097 | orchestrator | Sunday 29 March 2026 03:50:16 +0000 (0:00:00.371) 0:00:14.465 ********** 2026-03-29 03:50:20.887109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1110571, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8121378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1110571, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8121378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887130 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1110571, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8121378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1110616, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.818857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1110616, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.818857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887194 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1110616, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.818857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1110580, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.813454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1110580, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.813454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887225 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1110580, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.813454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1110617, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8202915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:20.887245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1110617, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8202915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 
03:50:20.887261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1110617, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8202915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:24.907344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1110594, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.815407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:24.907463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1110594, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.815407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1110594, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.815407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1110608, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8175952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1110608, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8175952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1110608, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8175952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1110567, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8107789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1110567, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8107789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1110567, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8107789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1110576, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8121378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1110576, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8121378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1110576, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8121378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:24.907716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1110583, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.813669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1110583, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.813669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1110583, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.813669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1110600, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8160439, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1110600, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8160439, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1110600, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8160439, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1110613, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8176253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1110613, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8176253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1110613, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8176253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1110577, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8131082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1110577, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8131082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1110577, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8131082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1110606, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8166938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:28.879690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1110606, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8166938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1110606, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8166938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1110595, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.815829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1110595, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.815829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1110595, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.815829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1110593, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.814855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1110593, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.814855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1110593, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.814855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1110589, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.814706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1110589, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.814706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1110589, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.814706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1110601, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8164718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1110601, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8164718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:33.187375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1110601, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8164718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1110585, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8142722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1110585, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8142722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1110585, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8142722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1110612, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8176253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1110612, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8176253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1110612, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8176253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1110750, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8430479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1110750, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8430479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1110750, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8430479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1110648, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8266256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1110648, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8266256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.157995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1110648, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8266256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:37.158003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1110633, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8216453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1110633, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8216453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1110633, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8216453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1110681, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8298166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1110681, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8298166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1110681, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8298166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1110623, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8204982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1110623, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8204982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1110623, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8204982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:41.015764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1110718, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.835858,
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:41.015779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1110718, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.835858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:41.015785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1110718, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.835858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:41.015791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1110687, 'dev': 121, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8337219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:41.015803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1110687, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8337219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1110687, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8337219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1110725, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8366637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1110725, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8366637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1110725, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8366637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1110743, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8417788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1110743, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8417788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1110743, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8417788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600595 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1110715, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8348267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1110715, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8348267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1110715, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8348267, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600637 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1110673, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8287401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1110673, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8287401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:45.600663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1110673, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8287401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-29 03:50:49.309811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1110641, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8235466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.309942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1110641, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8235466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.309960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1110641, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8235466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.309965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1110668, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.82851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.309971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1110668, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.82851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.309975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1110668, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.82851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.309990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1110636, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8227997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.310001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1110636, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8227997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.310008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1110636, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8227997, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.310012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1110678, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8291678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.310047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1110678, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8291678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.310051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 
'inode': 1110678, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8291678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:49.310060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1110732, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.84118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:52.955367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1110732, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.84118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:52.955480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1110732, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.84118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:52.955491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1110729, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.839312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:52.955499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1110729, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.839312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:52.955505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1110729, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.839312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:52.955511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1110625, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8210292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:52.955548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1110625, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8210292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 03:50:52.955559 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1110625, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8210292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:52.955565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1110630, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.821254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:52.955570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1110630, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.821254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:52.955576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1110630, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.821254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:52.955582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1110712, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.834185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:50:52.955598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1110712, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.834185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:52:31.731649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1110712, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.834185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:52:31.731764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1110727, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8376257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:52:31.731779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1110727, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8376257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:52:31.731790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1110727, 'dev': 121, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774748990.8376257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 03:52:31.731801 | orchestrator |
2026-03-29 03:52:31.731814 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-29 03:52:31.731941 | orchestrator | Sunday 29 March 2026 03:50:54 +0000 (0:00:37.974) 0:00:52.439 **********
2026-03-29 03:52:31.731954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 03:52:31.731984 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 03:52:31.732002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 03:52:31.732012 | orchestrator |
2026-03-29 03:52:31.732023 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-29 03:52:31.732033 | orchestrator | Sunday 29 March 2026 03:50:55 +0000 (0:00:01.084) 0:00:53.524 **********
2026-03-29 03:52:31.732044 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:52:31.732055 | orchestrator |
2026-03-29 03:52:31.732065 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-29 03:52:31.732074 | orchestrator | Sunday 29 March 2026 03:50:57 +0000 (0:00:02.370) 0:00:55.894 **********
2026-03-29 03:52:31.732084 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:52:31.732093 | orchestrator |
2026-03-29 03:52:31.732102 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 03:52:31.732112 | orchestrator | Sunday 29 March 2026 03:51:00 +0000 (0:00:02.438) 0:00:58.333 **********
2026-03-29 03:52:31.732122 | orchestrator |
2026-03-29 03:52:31.732132 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 03:52:31.732142 | orchestrator | Sunday 29 March 2026 03:51:00 +0000 (0:00:00.072) 0:00:58.406 **********
2026-03-29 03:52:31.732152 | orchestrator |
2026-03-29 03:52:31.732163 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 03:52:31.732173 | orchestrator | Sunday 29 March 2026 03:51:00 +0000 (0:00:00.082) 0:00:58.488 **********
2026-03-29 03:52:31.732184 | orchestrator |
2026-03-29 03:52:31.732195 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-29 03:52:31.732205 | orchestrator | Sunday 29 March 2026 03:51:00 +0000 (0:00:00.081) 0:00:58.570 **********
2026-03-29 03:52:31.732215 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:52:31.732226 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:52:31.732237 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:52:31.732259 | orchestrator |
2026-03-29 03:52:31.732269 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-29 03:52:31.732279 | orchestrator | Sunday 29 March 2026 03:51:02 +0000 (0:00:02.195) 0:01:00.765 **********
2026-03-29 03:52:31.732289 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:52:31.732301 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:52:31.732311 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-29 03:52:31.732323 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-29 03:52:31.732334 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-29 03:52:31.732343 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-03-29 03:52:31.732350 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:52:31.732357 | orchestrator |
2026-03-29 03:52:31.732364 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-29 03:52:31.732371 | orchestrator | Sunday 29 March 2026 03:51:54 +0000 (0:00:51.676) 0:01:52.442 **********
2026-03-29 03:52:31.732377 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:52:31.732384 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:52:31.732391 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:52:31.732397 | orchestrator |
2026-03-29 03:52:31.732404 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-29 03:52:31.732411 | orchestrator | Sunday 29 March 2026 03:52:26 +0000 (0:00:32.087) 0:02:24.530 **********
2026-03-29 03:52:31.732417 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:52:31.732424 | orchestrator |
2026-03-29 03:52:31.732430 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-29 03:52:31.732437 | orchestrator | Sunday 29 March 2026 03:52:28 +0000 (0:00:02.371) 0:02:26.901 **********
2026-03-29 03:52:31.732444 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:52:31.732450 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:52:31.732457 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:52:31.732463 | orchestrator |
2026-03-29 03:52:31.732470 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-29 03:52:31.732476 | orchestrator | Sunday 29 March 2026 03:52:28 +0000 (0:00:00.309) 0:02:27.210 **********
2026-03-29 03:52:31.732485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-29 03:52:31.732503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-29 03:52:32.364094 | orchestrator |
2026-03-29 03:52:32.364189 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-29 03:52:32.364201 | orchestrator | Sunday 29 March 2026 03:52:31 +0000 (0:00:02.743) 0:02:29.954 **********
2026-03-29 03:52:32.364209 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:52:32.364217 | orchestrator |
2026-03-29 03:52:32.364225 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:52:32.364233 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 03:52:32.364260 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 03:52:32.364267 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 03:52:32.364293 | orchestrator |
2026-03-29 03:52:32.364299 | orchestrator |
2026-03-29 03:52:32.364306 | orchestrator | TASKS RECAP
********************************************************************
2026-03-29 03:52:32.364313 | orchestrator | Sunday 29 March 2026 03:52:32 +0000 (0:00:00.289) 0:02:30.243 **********
2026-03-29 03:52:32.364320 | orchestrator | ===============================================================================
2026-03-29 03:52:32.364327 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.68s
2026-03-29 03:52:32.364333 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.97s
2026-03-29 03:52:32.364339 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.09s
2026-03-29 03:52:32.364346 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.74s
2026-03-29 03:52:32.364352 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.44s
2026-03-29 03:52:32.364358 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.37s
2026-03-29 03:52:32.364364 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.37s
2026-03-29 03:52:32.364371 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.20s
2026-03-29 03:52:32.364377 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.75s
2026-03-29 03:52:32.364384 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.57s
2026-03-29 03:52:32.364390 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.32s
2026-03-29 03:52:32.364397 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s
2026-03-29 03:52:32.364403 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.25s
2026-03-29 03:52:32.364410 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s
2026-03-29 03:52:32.364416 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.84s
2026-03-29 03:52:32.364423 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.82s
2026-03-29 03:52:32.364429 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s
2026-03-29 03:52:32.364435 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.73s
2026-03-29 03:52:32.364441 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.72s
2026-03-29 03:52:32.364447 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s
2026-03-29 03:52:32.668409 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-03-29 03:52:32.677646 | orchestrator | + set -e
2026-03-29 03:52:32.677734 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-29 03:52:32.678691 | orchestrator | ++ export INTERACTIVE=false
2026-03-29 03:52:32.678762 | orchestrator | ++ INTERACTIVE=false
2026-03-29 03:52:32.678783 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-29 03:52:32.678801 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-29 03:52:32.678819 | orchestrator | + source /opt/manager-vars.sh
2026-03-29 03:52:32.679723 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-29 03:52:32.679759 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-29 03:52:32.679773 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-29 03:52:32.679798 | orchestrator | ++ CEPH_VERSION=reef
2026-03-29 03:52:32.679817 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-29 03:52:32.679835 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-29 03:52:32.679912 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-29 03:52:32.679927 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-29 03:52:32.679937 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-29 03:52:32.679947 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-29 03:52:32.679957 | orchestrator | ++ export ARA=false
2026-03-29 03:52:32.679967 | orchestrator | ++ ARA=false
2026-03-29 03:52:32.679977 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-29 03:52:32.679987 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-29 03:52:32.679996 | orchestrator | ++ export TEMPEST=false
2026-03-29 03:52:32.680006 | orchestrator | ++ TEMPEST=false
2026-03-29 03:52:32.680016 | orchestrator | ++ export IS_ZUUL=true
2026-03-29 03:52:32.680052 | orchestrator | ++ IS_ZUUL=true
2026-03-29 03:52:32.680070 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84
2026-03-29 03:52:32.680081 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84
2026-03-29 03:52:32.680091 | orchestrator | ++ export EXTERNAL_API=false
2026-03-29 03:52:32.680100 | orchestrator | ++ EXTERNAL_API=false
2026-03-29 03:52:32.680110 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-29 03:52:32.680119 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-29 03:52:32.680129 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-29 03:52:32.680139 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-29 03:52:32.680149 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-29 03:52:32.680158 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-29 03:52:32.681052 | orchestrator | ++ semver 9.5.0 8.0.0
2026-03-29 03:52:32.734245 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-29 03:52:32.734346 | orchestrator | + osism apply clusterapi
2026-03-29 03:52:35.268601 | orchestrator | 2026-03-29 03:52:35 | INFO  | Task 5222ee29-f0de-48ce-96b3-ecc8cd0b754e (clusterapi) was prepared for execution.
2026-03-29 03:52:35.268669 | orchestrator | 2026-03-29 03:52:35 | INFO  | It takes a moment until task 5222ee29-f0de-48ce-96b3-ecc8cd0b754e (clusterapi) has been started and output is visible here.
2026-03-29 03:53:34.648334 | orchestrator |
2026-03-29 03:53:34.648413 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-03-29 03:53:34.648420 | orchestrator |
2026-03-29 03:53:34.648425 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-03-29 03:53:34.648430 | orchestrator | Sunday 29 March 2026 03:52:39 +0000 (0:00:00.188) 0:00:00.188 **********
2026-03-29 03:53:34.648435 | orchestrator | included: cert_manager for testbed-manager
2026-03-29 03:53:34.648439 | orchestrator |
2026-03-29 03:53:34.648443 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-03-29 03:53:34.648447 | orchestrator | Sunday 29 March 2026 03:52:40 +0000 (0:00:00.249) 0:00:00.437 **********
2026-03-29 03:53:34.648452 | orchestrator | changed: [testbed-manager]
2026-03-29 03:53:34.648456 | orchestrator |
2026-03-29 03:53:34.648460 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-03-29 03:53:34.648478 | orchestrator | Sunday 29 March 2026 03:52:46 +0000 (0:00:06.324) 0:00:06.761 **********
2026-03-29 03:53:34.648482 | orchestrator | changed: [testbed-manager]
2026-03-29 03:53:34.648485 | orchestrator |
2026-03-29 03:53:34.648489 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-03-29 03:53:34.648493 | orchestrator |
2026-03-29 03:53:34.648497 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-03-29 03:53:34.648501 | orchestrator | Sunday 29 March 2026 03:53:13 +0000 (0:00:27.318) 0:00:34.080 **********
2026-03-29 03:53:34.648505 | orchestrator | ok: [testbed-manager]
2026-03-29 03:53:34.648509 | orchestrator |
2026-03-29 03:53:34.648513 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-03-29 03:53:34.648517 | orchestrator | Sunday 29 March 2026 03:53:14 +0000 (0:00:01.195) 0:00:35.275 **********
2026-03-29 03:53:34.648521 | orchestrator | ok: [testbed-manager]
2026-03-29 03:53:34.648524 | orchestrator |
2026-03-29 03:53:34.648528 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-03-29 03:53:34.648532 | orchestrator | Sunday 29 March 2026 03:53:15 +0000 (0:00:00.153) 0:00:35.429 **********
2026-03-29 03:53:34.648536 | orchestrator | ok: [testbed-manager]
2026-03-29 03:53:34.648539 | orchestrator |
2026-03-29 03:53:34.648543 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-03-29 03:53:34.648547 | orchestrator | Sunday 29 March 2026 03:53:31 +0000 (0:00:16.850) 0:00:52.279 **********
2026-03-29 03:53:34.648551 | orchestrator | skipping: [testbed-manager]
2026-03-29 03:53:34.648554 | orchestrator |
2026-03-29 03:53:34.648558 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-03-29 03:53:34.648562 | orchestrator | Sunday 29 March 2026 03:53:31 +0000 (0:00:00.142) 0:00:52.422 **********
2026-03-29 03:53:34.648566 | orchestrator | changed: [testbed-manager]
2026-03-29 03:53:34.648569 | orchestrator |
2026-03-29 03:53:34.648573 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:53:34.648596 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-29 03:53:34.648603 | orchestrator |
2026-03-29 03:53:34.648625 | orchestrator |
2026-03-29 03:53:34.648632 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:53:34.648645 | orchestrator | Sunday 29 March 2026 03:53:34 +0000 (0:00:02.309) 0:00:54.731 **********
2026-03-29 03:53:34.648650 | orchestrator | ===============================================================================
2026-03-29 03:53:34.648656 | orchestrator | cert_manager : Deploy cert-manager ------------------------------------- 27.32s
2026-03-29 03:53:34.648662 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.85s
2026-03-29 03:53:34.648667 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 6.32s
2026-03-29 03:53:34.648674 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.31s
2026-03-29 03:53:34.648680 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.20s
2026-03-29 03:53:34.648686 | orchestrator | Include cert_manager role ----------------------------------------------- 0.25s
2026-03-29 03:53:34.648692 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.15s
2026-03-29 03:53:34.648698 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s
2026-03-29 03:53:34.963725 | orchestrator | + osism apply magnum
2026-03-29 03:53:36.969123 | orchestrator | 2026-03-29 03:53:36 | INFO  | Task e08d3439-bcbc-48e2-a851-f5cf42ece5ac (magnum) was prepared for execution.
2026-03-29 03:53:36.969195 | orchestrator | 2026-03-29 03:53:36 | INFO  | It takes a moment until task e08d3439-bcbc-48e2-a851-f5cf42ece5ac (magnum) has been started and output is visible here.
2026-03-29 03:54:22.171210 | orchestrator |
2026-03-29 03:54:22.171337 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 03:54:22.171354 | orchestrator |
2026-03-29 03:54:22.171367 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 03:54:22.171380 | orchestrator | Sunday 29 March 2026 03:53:41 +0000 (0:00:00.260) 0:00:00.260 **********
2026-03-29 03:54:22.171392 | orchestrator | ok: [testbed-node-0]
2026-03-29 03:54:22.171406 | orchestrator | ok: [testbed-node-1]
2026-03-29 03:54:22.171419 | orchestrator | ok: [testbed-node-2]
2026-03-29 03:54:22.171431 | orchestrator |
2026-03-29 03:54:22.171442 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 03:54:22.171454 | orchestrator | Sunday 29 March 2026 03:53:41 +0000 (0:00:00.332) 0:00:00.592 **********
2026-03-29 03:54:22.171466 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-29 03:54:22.171478 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-29 03:54:22.171489 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-29 03:54:22.171501 | orchestrator |
2026-03-29 03:54:22.171513 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-29 03:54:22.171525 | orchestrator |
2026-03-29 03:54:22.171536 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-29 03:54:22.171549 | orchestrator | Sunday 29 March 2026 03:53:42 +0000 (0:00:00.447) 0:00:01.040 **********
2026-03-29 03:54:22.171560 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 03:54:22.171574 | orchestrator |
2026-03-29 03:54:22.171586 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-29 03:54:22.171598 | orchestrator | Sunday 29 March 2026 03:53:42 +0000 (0:00:00.615) 0:00:01.655 **********
2026-03-29 03:54:22.171612 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-29 03:54:22.171624 | orchestrator |
2026-03-29 03:54:22.171636 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-29 03:54:22.171649 | orchestrator | Sunday 29 March 2026 03:53:46 +0000 (0:00:04.057) 0:00:05.713 **********
2026-03-29 03:54:22.171708 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-29 03:54:22.171719 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-29 03:54:22.171727 | orchestrator |
2026-03-29 03:54:22.171734 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-29 03:54:22.171741 | orchestrator | Sunday 29 March 2026 03:53:53 +0000 (0:00:06.946) 0:00:12.659 **********
2026-03-29 03:54:22.171749 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 03:54:22.171757 | orchestrator |
2026-03-29 03:54:22.171766 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-29 03:54:22.171774 | orchestrator | Sunday 29 March 2026 03:53:57 +0000 (0:00:03.636) 0:00:16.296 **********
2026-03-29 03:54:22.171782 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 03:54:22.171867 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-29 03:54:22.171877 | orchestrator |
2026-03-29 03:54:22.171886 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-29 03:54:22.171895 | orchestrator | Sunday 29 March 2026 03:54:01 +0000 (0:00:04.115) 0:00:20.411 **********
2026-03-29 03:54:22.171903 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 03:54:22.171911 | orchestrator |
2026-03-29 03:54:22.171919 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-29 03:54:22.171928 | orchestrator | Sunday 29 March 2026 03:54:04 +0000 (0:00:03.480) 0:00:23.892 **********
2026-03-29 03:54:22.171936 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-29 03:54:22.171944 | orchestrator |
2026-03-29 03:54:22.171951 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-29 03:54:22.171959 | orchestrator | Sunday 29 March 2026 03:54:08 +0000 (0:00:04.038) 0:00:27.931 **********
2026-03-29 03:54:22.171966 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:54:22.171973 | orchestrator |
2026-03-29 03:54:22.171981 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-29 03:54:22.171988 | orchestrator | Sunday 29 March 2026 03:54:12 +0000 (0:00:03.484) 0:00:31.416 **********
2026-03-29 03:54:22.171995 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:54:22.172002 | orchestrator |
2026-03-29 03:54:22.172009 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-29 03:54:22.172017 | orchestrator | Sunday 29 March 2026 03:54:16 +0000 (0:00:04.352) 0:00:35.768 **********
2026-03-29 03:54:22.172024 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:54:22.172035 | orchestrator |
2026-03-29 03:54:22.172047 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-29 03:54:22.172065 | orchestrator | Sunday 29 March 2026 03:54:20 +0000 (0:00:03.753) 0:00:39.521 **********
2026-03-29 03:54:22.172102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:22.172118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:22.172149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:22.172163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:22.172176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:22.172197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:29.846354 | orchestrator |
2026-03-29 03:54:29.846500 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-29 03:54:29.846525 | orchestrator | Sunday 29 March 2026 03:54:22 +0000 (0:00:01.661) 0:00:41.183 **********
2026-03-29 03:54:29.846540 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:54:29.846555 | orchestrator |
2026-03-29 03:54:29.846569 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-29 03:54:29.846583 | orchestrator | Sunday 29 March 2026 03:54:22 +0000 (0:00:00.146) 0:00:41.330 **********
2026-03-29 03:54:29.846598 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:54:29.846612 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:54:29.846625 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:54:29.846638 | orchestrator |
2026-03-29 03:54:29.846652 | orchestrator |
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-29 03:54:29.846665 | orchestrator | Sunday 29 March 2026 03:54:22 +0000 (0:00:00.335) 0:00:41.666 ********** 2026-03-29 03:54:29.846678 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 03:54:29.846693 | orchestrator | 2026-03-29 03:54:29.846707 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-29 03:54:29.846721 | orchestrator | Sunday 29 March 2026 03:54:23 +0000 (0:00:00.864) 0:00:42.530 ********** 2026-03-29 03:54:29.846756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:29.846775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:29.846849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:29.846902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:54:29.846920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:54:29.846943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:54:29.846958 | orchestrator | 2026-03-29 03:54:29.846996 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-29 03:54:29.847012 
| orchestrator | Sunday 29 March 2026 03:54:26 +0000 (0:00:02.501) 0:00:45.031 ********** 2026-03-29 03:54:29.847027 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:54:29.847042 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:54:29.847055 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:54:29.847069 | orchestrator | 2026-03-29 03:54:29.847082 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-29 03:54:29.847096 | orchestrator | Sunday 29 March 2026 03:54:26 +0000 (0:00:00.519) 0:00:45.551 ********** 2026-03-29 03:54:29.847111 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 03:54:29.847127 | orchestrator | 2026-03-29 03:54:29.847140 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-29 03:54:29.847155 | orchestrator | Sunday 29 March 2026 03:54:27 +0000 (0:00:00.598) 0:00:46.149 ********** 2026-03-29 03:54:29.847170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:29.847211 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:30.762346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:30.762464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:54:30.762480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:54:30.762490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:54:30.762519 | orchestrator | 2026-03-29 03:54:30.762530 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-29 03:54:30.762541 | orchestrator | Sunday 29 March 2026 03:54:29 +0000 (0:00:02.719) 0:00:48.868 ********** 2026-03-29 03:54:30.762567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 03:54:30.762577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:54:30.762586 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:54:30.762602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 03:54:30.762612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:54:30.762627 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:54:30.762636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 03:54:30.762653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:54:34.443385 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:54:34.443459 | orchestrator | 2026-03-29 
03:54:34.443465 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-29 03:54:34.443471 | orchestrator | Sunday 29 March 2026 03:54:30 +0000 (0:00:00.909) 0:00:49.778 ********** 2026-03-29 03:54:34.443488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 03:54:34.443495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:54:34.443501 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 03:54:34.443505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 03:54:34.443523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:54:34.443527 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:54:34.443542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 03:54:34.443550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 03:54:34.443554 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:54:34.443557 | orchestrator | 2026-03-29 03:54:34.443561 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-29 03:54:34.443566 | orchestrator | Sunday 29 March 2026 03:54:31 +0000 (0:00:00.906) 0:00:50.684 ********** 2026-03-29 03:54:34.443570 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:34.443578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:34.443585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 03:54:40.522206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 03:54:40.522315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:40.522348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:40.522356 | orchestrator |
2026-03-29 03:54:40.522365 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-03-29 03:54:40.522372 | orchestrator | Sunday 29 March 2026 03:54:34 +0000 (0:00:02.776) 0:00:53.461 **********
2026-03-29 03:54:40.522378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:40.522394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:40.522403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:40.522407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:40.522416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:40.522420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:40.522424 | orchestrator |
2026-03-29 03:54:40.522438 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-03-29 03:54:40.522517 | orchestrator | Sunday 29 March 2026 03:54:39 +0000 (0:00:05.423) 0:00:58.884 **********
2026-03-29 03:54:40.522536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:42.405575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:42.405710 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:54:42.405754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:42.405766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:42.405927 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:54:42.405944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:42.405966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:42.405972 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:54:42.405977 | orchestrator |
2026-03-29 03:54:42.405984 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2026-03-29 03:54:42.405990 | orchestrator | Sunday 29 March 2026 03:54:40 +0000 (0:00:00.661) 0:00:59.546 **********
2026-03-29 03:54:42.406004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:42.406071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:42.406078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 03:54:42.406083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:54:42.406096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 03:55:34.208728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})
2026-03-29 03:55:34.209294 | orchestrator |
2026-03-29 03:55:34.209312 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-29 03:55:34.209318 | orchestrator | Sunday 29 March 2026 03:54:42 +0000 (0:00:01.877) 0:01:01.424 **********
2026-03-29 03:55:34.209323 | orchestrator | skipping: [testbed-node-0]
2026-03-29 03:55:34.209330 | orchestrator | skipping: [testbed-node-1]
2026-03-29 03:55:34.209334 | orchestrator | skipping: [testbed-node-2]
2026-03-29 03:55:34.209339 | orchestrator |
2026-03-29 03:55:34.209343 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-29 03:55:34.209348 | orchestrator | Sunday 29 March 2026 03:54:42 +0000 (0:00:00.546) 0:01:01.971 **********
2026-03-29 03:55:34.209352 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:55:34.209357 | orchestrator |
2026-03-29 03:55:34.209361 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-29 03:55:34.209366 | orchestrator | Sunday 29 March 2026 03:54:45 +0000 (0:00:02.389) 0:01:04.360 **********
2026-03-29 03:55:34.209370 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:55:34.209375 | orchestrator |
2026-03-29 03:55:34.209379 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-29 03:55:34.209383 | orchestrator | Sunday 29 March 2026 03:54:47 +0000 (0:00:02.576) 0:01:06.937 **********
2026-03-29 03:55:34.209388 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:55:34.209392 | orchestrator |
2026-03-29 03:55:34.209396 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-29 03:55:34.209401 | orchestrator | Sunday 29 March 2026 03:55:05 +0000 (0:00:17.179) 0:01:24.117 **********
2026-03-29 03:55:34.209405 | orchestrator |
2026-03-29 03:55:34.209410 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-29 03:55:34.209414 | orchestrator | Sunday 29 March 2026 03:55:05 +0000 (0:00:00.116) 0:01:24.233 **********
2026-03-29 03:55:34.209418 | orchestrator |
2026-03-29 03:55:34.209423 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-29 03:55:34.209427 | orchestrator | Sunday 29 March 2026 03:55:05 +0000 (0:00:00.083) 0:01:24.317 **********
2026-03-29 03:55:34.209432 | orchestrator |
2026-03-29 03:55:34.209436 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-29 03:55:34.209440 | orchestrator | Sunday 29 March 2026 03:55:05 +0000 (0:00:00.075) 0:01:24.393 **********
2026-03-29 03:55:34.209444 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:55:34.209449 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:55:34.209454 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:55:34.209458 | orchestrator |
2026-03-29 03:55:34.209462 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-29 03:55:34.209467 | orchestrator | Sunday 29 March 2026 03:55:23 +0000 (0:00:18.369) 0:01:42.762 **********
2026-03-29 03:55:34.209471 | orchestrator | changed: [testbed-node-0]
2026-03-29 03:55:34.209476 | orchestrator | changed: [testbed-node-2]
2026-03-29 03:55:34.209480 | orchestrator | changed: [testbed-node-1]
2026-03-29 03:55:34.209485 | orchestrator |
2026-03-29 03:55:34.209489 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:55:34.209494 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 03:55:34.209517 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 03:55:34.209522 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 03:55:34.209526 | orchestrator |
2026-03-29 03:55:34.209533 | orchestrator |
2026-03-29 03:55:34.209539 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:55:34.209548 | orchestrator | Sunday 29 March 2026 03:55:33 +0000 (0:00:10.117) 0:01:52.879 **********
2026-03-29 03:55:34.209556 | orchestrator | ===============================================================================
2026-03-29 03:55:34.209563 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.37s
2026-03-29 03:55:34.209569 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.18s
2026-03-29 03:55:34.209577 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.12s
2026-03-29 03:55:34.209583 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.95s
2026-03-29 03:55:34.209589 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.42s
2026-03-29 03:55:34.209596 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.35s
2026-03-29 03:55:34.209602 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.12s
2026-03-29 03:55:34.209624 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.06s
2026-03-29 03:55:34.209638 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.04s
2026-03-29 03:55:34.209645 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.75s
2026-03-29 03:55:34.209651 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.64s
2026-03-29 03:55:34.209657 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.48s
2026-03-29 03:55:34.209663 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.48s
2026-03-29 03:55:34.209669 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.78s
2026-03-29 03:55:34.209675 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.72s
2026-03-29 03:55:34.209681 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.58s
2026-03-29 03:55:34.209687 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.50s
2026-03-29 03:55:34.209693 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.39s
2026-03-29 03:55:34.209699 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.88s
2026-03-29 03:55:34.209705 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.66s
2026-03-29 03:55:34.884072 | orchestrator | ok: Runtime: 1:42:25.023426
2026-03-29 03:55:35.124925 |
2026-03-29 03:55:35.125132 | TASK [Deploy in a nutshell]
2026-03-29 03:55:35.659905 | orchestrator | skipping: Conditional result was False
2026-03-29 03:55:35.682300 |
2026-03-29 03:55:35.682467 | TASK [Bootstrap services]
2026-03-29 03:55:36.431871 | orchestrator |
2026-03-29 03:55:36.432029 | orchestrator | # BOOTSTRAP
2026-03-29 03:55:36.432043 | orchestrator |
2026-03-29 03:55:36.432051 | orchestrator | + set -e
2026-03-29 03:55:36.432058 | orchestrator | + echo
2026-03-29 03:55:36.432067 | orchestrator | + echo '# BOOTSTRAP'
2026-03-29 03:55:36.432079 | orchestrator | + echo
2026-03-29 03:55:36.432109 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-29 03:55:36.440971 | orchestrator | + set -e
2026-03-29 03:55:36.441062 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-29 03:55:38.615874 | orchestrator | 2026-03-29 03:55:38 | INFO  | It takes a 
moment until task cca4f143-e56a-4283-80cb-b3b631cf82e8 (flavor-manager) has been started and output is visible here.
2026-03-29 03:55:46.492941 | orchestrator | 2026-03-29 03:55:41 | INFO  | Flavor SCS-1L-1 created
2026-03-29 03:55:46.493046 | orchestrator | 2026-03-29 03:55:42 | INFO  | Flavor SCS-1L-1-5 created
2026-03-29 03:55:46.493056 | orchestrator | 2026-03-29 03:55:42 | INFO  | Flavor SCS-1V-2 created
2026-03-29 03:55:46.493062 | orchestrator | 2026-03-29 03:55:42 | INFO  | Flavor SCS-1V-2-5 created
2026-03-29 03:55:46.493075 | orchestrator | 2026-03-29 03:55:42 | INFO  | Flavor SCS-1V-4 created
2026-03-29 03:55:46.493985 | orchestrator | 2026-03-29 03:55:42 | INFO  | Flavor SCS-1V-4-10 created
2026-03-29 03:55:46.494010 | orchestrator | 2026-03-29 03:55:42 | INFO  | Flavor SCS-1V-8 created
2026-03-29 03:55:46.494060 | orchestrator | 2026-03-29 03:55:43 | INFO  | Flavor SCS-1V-8-20 created
2026-03-29 03:55:46.494080 | orchestrator | 2026-03-29 03:55:43 | INFO  | Flavor SCS-2V-4 created
2026-03-29 03:55:46.494088 | orchestrator | 2026-03-29 03:55:43 | INFO  | Flavor SCS-2V-4-10 created
2026-03-29 03:55:46.494095 | orchestrator | 2026-03-29 03:55:43 | INFO  | Flavor SCS-2V-8 created
2026-03-29 03:55:46.494101 | orchestrator | 2026-03-29 03:55:43 | INFO  | Flavor SCS-2V-8-20 created
2026-03-29 03:55:46.494109 | orchestrator | 2026-03-29 03:55:43 | INFO  | Flavor SCS-2V-16 created
2026-03-29 03:55:46.494116 | orchestrator | 2026-03-29 03:55:44 | INFO  | Flavor SCS-2V-16-50 created
2026-03-29 03:55:46.494124 | orchestrator | 2026-03-29 03:55:44 | INFO  | Flavor SCS-4V-8 created
2026-03-29 03:55:46.494131 | orchestrator | 2026-03-29 03:55:44 | INFO  | Flavor SCS-4V-8-20 created
2026-03-29 03:55:46.494139 | orchestrator | 2026-03-29 03:55:44 | INFO  | Flavor SCS-4V-16 created
2026-03-29 03:55:46.494145 | orchestrator | 2026-03-29 03:55:44 | INFO  | Flavor SCS-4V-16-50 created
2026-03-29 03:55:46.494150 | orchestrator | 2026-03-29 03:55:44 | INFO  | Flavor SCS-4V-32 created
2026-03-29 03:55:46.494155 | orchestrator | 2026-03-29 03:55:44 | INFO  | Flavor SCS-4V-32-100 created
2026-03-29 03:55:46.494159 | orchestrator | 2026-03-29 03:55:45 | INFO  | Flavor SCS-8V-16 created
2026-03-29 03:55:46.494164 | orchestrator | 2026-03-29 03:55:45 | INFO  | Flavor SCS-8V-16-50 created
2026-03-29 03:55:46.494169 | orchestrator | 2026-03-29 03:55:45 | INFO  | Flavor SCS-8V-32 created
2026-03-29 03:55:46.494173 | orchestrator | 2026-03-29 03:55:45 | INFO  | Flavor SCS-8V-32-100 created
2026-03-29 03:55:46.494177 | orchestrator | 2026-03-29 03:55:45 | INFO  | Flavor SCS-16V-32 created
2026-03-29 03:55:46.494182 | orchestrator | 2026-03-29 03:55:45 | INFO  | Flavor SCS-16V-32-100 created
2026-03-29 03:55:46.494186 | orchestrator | 2026-03-29 03:55:45 | INFO  | Flavor SCS-2V-4-20s created
2026-03-29 03:55:46.494190 | orchestrator | 2026-03-29 03:55:46 | INFO  | Flavor SCS-4V-8-50s created
2026-03-29 03:55:46.494195 | orchestrator | 2026-03-29 03:55:46 | INFO  | Flavor SCS-8V-32-100s created
2026-03-29 03:55:48.901601 | orchestrator | 2026-03-29 03:55:48 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-29 03:55:59.042668 | orchestrator | 2026-03-29 03:55:59 | INFO  | Task f970ba90-e965-4a2f-bf1d-8498d763363b (bootstrap-basic) was prepared for execution.
2026-03-29 03:55:59.042795 | orchestrator | 2026-03-29 03:55:59 | INFO  | It takes a moment until task f970ba90-e965-4a2f-bf1d-8498d763363b (bootstrap-basic) has been started and output is visible here.
2026-03-29 03:56:41.476554 | orchestrator |
2026-03-29 03:56:41.476677 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-29 03:56:41.476695 | orchestrator |
2026-03-29 03:56:41.476785 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 03:56:41.476798 | orchestrator | Sunday 29 March 2026 03:56:03 +0000 (0:00:00.070) 0:00:00.070 **********
2026-03-29 03:56:41.476810 | orchestrator | ok: [localhost]
2026-03-29 03:56:41.476821 | orchestrator |
2026-03-29 03:56:41.476832 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-29 03:56:41.476843 | orchestrator | Sunday 29 March 2026 03:56:05 +0000 (0:00:01.822) 0:00:01.892 **********
2026-03-29 03:56:41.476854 | orchestrator | ok: [localhost]
2026-03-29 03:56:41.476865 | orchestrator |
2026-03-29 03:56:41.476876 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-29 03:56:41.476888 | orchestrator | Sunday 29 March 2026 03:56:12 +0000 (0:00:07.154) 0:00:09.047 **********
2026-03-29 03:56:41.476909 | orchestrator | changed: [localhost]
2026-03-29 03:56:41.476929 | orchestrator |
2026-03-29 03:56:41.476949 | orchestrator | TASK [Create public network] ***************************************************
2026-03-29 03:56:41.476968 | orchestrator | Sunday 29 March 2026 03:56:18 +0000 (0:00:06.563) 0:00:15.611 **********
2026-03-29 03:56:41.476985 | orchestrator | changed: [localhost]
2026-03-29 03:56:41.477003 | orchestrator |
2026-03-29 03:56:41.477022 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-29 03:56:41.477043 | orchestrator | Sunday 29 March 2026 03:56:24 +0000 (0:00:05.547) 0:00:21.159 **********
2026-03-29 03:56:41.477067 | orchestrator | changed: [localhost]
2026-03-29 03:56:41.477088 | orchestrator |
2026-03-29 03:56:41.477110 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-29 03:56:41.477130 | orchestrator | Sunday 29 March 2026 03:56:30 +0000 (0:00:06.596) 0:00:27.756 **********
2026-03-29 03:56:41.477149 | orchestrator | changed: [localhost]
2026-03-29 03:56:41.477168 | orchestrator |
2026-03-29 03:56:41.477188 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-29 03:56:41.477211 | orchestrator | Sunday 29 March 2026 03:56:34 +0000 (0:00:04.056) 0:00:31.813 **********
2026-03-29 03:56:41.477231 | orchestrator | changed: [localhost]
2026-03-29 03:56:41.477247 | orchestrator |
2026-03-29 03:56:41.477261 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-29 03:56:41.477287 | orchestrator | Sunday 29 March 2026 03:56:38 +0000 (0:00:03.535) 0:00:35.348 **********
2026-03-29 03:56:41.477300 | orchestrator | ok: [localhost]
2026-03-29 03:56:41.477312 | orchestrator |
2026-03-29 03:56:41.477325 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 03:56:41.477338 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 03:56:41.477351 | orchestrator |
2026-03-29 03:56:41.477364 | orchestrator |
2026-03-29 03:56:41.477377 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 03:56:41.477390 | orchestrator | Sunday 29 March 2026 03:56:41 +0000 (0:00:02.785) 0:00:38.133 **********
2026-03-29 03:56:41.477400 | orchestrator | ===============================================================================
2026-03-29 03:56:41.477411 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.15s
2026-03-29 03:56:41.477422 | orchestrator | Set public network to default ------------------------------------------- 6.60s
2026-03-29 03:56:41.477433 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.56s
2026-03-29 03:56:41.477444 | orchestrator | Create public network --------------------------------------------------- 5.55s
2026-03-29 03:56:41.477484 | orchestrator | Create public subnet ---------------------------------------------------- 4.06s
2026-03-29 03:56:41.477495 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.54s
2026-03-29 03:56:41.477506 | orchestrator | Create manager role ----------------------------------------------------- 2.79s
2026-03-29 03:56:41.477517 | orchestrator | Gathering Facts --------------------------------------------------------- 1.82s
2026-03-29 03:56:43.674339 | orchestrator | 2026-03-29 03:56:43 | INFO  | It takes a moment until task 24749d43-56f5-4a79-b057-fc9ea89a6d58 (image-manager) has been started and output is visible here.
2026-03-29 03:57:26.473312 | orchestrator | 2026-03-29 03:56:46 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-29 03:57:26.473440 | orchestrator | 2026-03-29 03:56:46 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-29 03:57:26.473457 | orchestrator | 2026-03-29 03:56:46 | INFO  | Importing image Cirros 0.6.2
2026-03-29 03:57:26.473469 | orchestrator | 2026-03-29 03:56:46 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-29 03:57:26.473481 | orchestrator | 2026-03-29 03:56:49 | INFO  | Waiting for image to leave queued state...
2026-03-29 03:57:26.473493 | orchestrator | 2026-03-29 03:56:51 | INFO  | Waiting for import to complete...
2026-03-29 03:57:26.473505 | orchestrator | 2026-03-29 03:57:01 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-29 03:57:26.473517 | orchestrator | 2026-03-29 03:57:01 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-29 03:57:26.473528 | orchestrator | 2026-03-29 03:57:01 | INFO  | Setting internal_version = 0.6.2
2026-03-29 03:57:26.473539 | orchestrator | 2026-03-29 03:57:01 | INFO  | Setting image_original_user = cirros
2026-03-29 03:57:26.473550 | orchestrator | 2026-03-29 03:57:01 | INFO  | Adding tag os:cirros
2026-03-29 03:57:26.473561 | orchestrator | 2026-03-29 03:57:02 | INFO  | Setting property architecture: x86_64
2026-03-29 03:57:26.473572 | orchestrator | 2026-03-29 03:57:02 | INFO  | Setting property hw_disk_bus: scsi
2026-03-29 03:57:26.473583 | orchestrator | 2026-03-29 03:57:02 | INFO  | Setting property hw_rng_model: virtio
2026-03-29 03:57:26.473594 | orchestrator | 2026-03-29 03:57:02 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-29 03:57:26.473605 | orchestrator | 2026-03-29 03:57:03 | INFO  | Setting property hw_watchdog_action: reset
2026-03-29 03:57:26.473616 | orchestrator | 2026-03-29 03:57:03 | INFO  | Setting property hypervisor_type: qemu
2026-03-29 03:57:26.473627 | orchestrator | 2026-03-29 03:57:03 | INFO  | Setting property os_distro: cirros
2026-03-29 03:57:26.473638 | orchestrator | 2026-03-29 03:57:03 | INFO  | Setting property os_purpose: minimal
2026-03-29 03:57:26.473648 | orchestrator | 2026-03-29 03:57:04 | INFO  | Setting property replace_frequency: never
2026-03-29 03:57:26.473663 | orchestrator | 2026-03-29 03:57:04 | INFO  | Setting property uuid_validity: none
2026-03-29 03:57:26.473717 | orchestrator | 2026-03-29 03:57:04 | INFO  | Setting property provided_until: none
2026-03-29 03:57:26.473734 | orchestrator | 2026-03-29 03:57:04 | INFO  | Setting property image_description: Cirros
2026-03-29 03:57:26.473751 | orchestrator | 2026-03-29 03:57:05 | INFO  | Setting property image_name: Cirros
2026-03-29 03:57:26.473768 | orchestrator | 2026-03-29 03:57:05 | INFO  | Setting property internal_version: 0.6.2
2026-03-29 03:57:26.473787 | orchestrator | 2026-03-29 03:57:05 | INFO  | Setting property image_original_user: cirros
2026-03-29 03:57:26.473840 | orchestrator | 2026-03-29 03:57:05 | INFO  | Setting property os_version: 0.6.2
2026-03-29 03:57:26.473874 | orchestrator | 2026-03-29 03:57:06 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-29 03:57:26.473897 | orchestrator | 2026-03-29 03:57:06 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-29 03:57:26.473916 | orchestrator | 2026-03-29 03:57:06 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-29 03:57:26.473928 | orchestrator | 2026-03-29 03:57:06 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-29 03:57:26.473941 | orchestrator | 2026-03-29 03:57:06 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-29 03:57:26.473953 | orchestrator | 2026-03-29 03:57:07 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-29 03:57:26.473970 | orchestrator | 2026-03-29 03:57:07 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-29 03:57:26.473982 | orchestrator | 2026-03-29 03:57:07 | INFO  | Importing image Cirros 0.6.3
2026-03-29 03:57:26.473995 | orchestrator | 2026-03-29 03:57:07 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-29 03:57:26.474008 | orchestrator | 2026-03-29 03:57:07 | INFO  | Waiting for image to leave queued state...
2026-03-29 03:57:26.474102 | orchestrator | 2026-03-29 03:57:09 | INFO  | Waiting for import to complete...
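Each "Setting property ..." line above corresponds to one property whose current value differs from (or is missing in) the desired image definition. A hedged sketch of computing such a diff — `properties_to_set` is an illustrative helper, not taken from the image-manager source; the property names and values are the ones visible in the log:

```python
def properties_to_set(current: dict, desired: dict) -> dict:
    """Return only the desired properties that differ from the image's
    current ones; each returned entry would produce one
    'Setting property <key>: <value>' log line."""
    return {k: v for k, v in desired.items() if current.get(k) != v}


desired = {
    "architecture": "x86_64",
    "hw_disk_bus": "scsi",
    "hw_rng_model": "virtio",
    "os_distro": "cirros",
}

# Only 'architecture' is already correct, so three updates remain.
updates = properties_to_set({"architecture": "x86_64"}, desired)
print(updates)
# -> {'hw_disk_bus': 'scsi', 'hw_rng_model': 'virtio', 'os_distro': 'cirros'}
```

Diffing before writing keeps the tool idempotent: a second run against an already-correct image would emit no "Setting property" lines at all.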
2026-03-29 03:57:26.474164 | orchestrator | 2026-03-29 03:57:20 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-29 03:57:26.474187 | orchestrator | 2026-03-29 03:57:20 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-29 03:57:26.474208 | orchestrator | 2026-03-29 03:57:20 | INFO  | Setting internal_version = 0.6.3
2026-03-29 03:57:26.474227 | orchestrator | 2026-03-29 03:57:20 | INFO  | Setting image_original_user = cirros
2026-03-29 03:57:26.474248 | orchestrator | 2026-03-29 03:57:20 | INFO  | Adding tag os:cirros
2026-03-29 03:57:26.474268 | orchestrator | 2026-03-29 03:57:20 | INFO  | Setting property architecture: x86_64
2026-03-29 03:57:26.474288 | orchestrator | 2026-03-29 03:57:20 | INFO  | Setting property hw_disk_bus: scsi
2026-03-29 03:57:26.474308 | orchestrator | 2026-03-29 03:57:21 | INFO  | Setting property hw_rng_model: virtio
2026-03-29 03:57:26.474321 | orchestrator | 2026-03-29 03:57:21 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-29 03:57:26.474332 | orchestrator | 2026-03-29 03:57:21 | INFO  | Setting property hw_watchdog_action: reset
2026-03-29 03:57:26.474343 | orchestrator | 2026-03-29 03:57:21 | INFO  | Setting property hypervisor_type: qemu
2026-03-29 03:57:26.474354 | orchestrator | 2026-03-29 03:57:22 | INFO  | Setting property os_distro: cirros
2026-03-29 03:57:26.474364 | orchestrator | 2026-03-29 03:57:22 | INFO  | Setting property os_purpose: minimal
2026-03-29 03:57:26.474376 | orchestrator | 2026-03-29 03:57:22 | INFO  | Setting property replace_frequency: never
2026-03-29 03:57:26.474395 | orchestrator | 2026-03-29 03:57:23 | INFO  | Setting property uuid_validity: none
2026-03-29 03:57:26.474414 | orchestrator | 2026-03-29 03:57:23 | INFO  | Setting property provided_until: none
2026-03-29 03:57:26.474432 | orchestrator | 2026-03-29 03:57:23 | INFO  | Setting property image_description: Cirros
2026-03-29 03:57:26.474450 | orchestrator | 2026-03-29 03:57:23 | INFO  | Setting property image_name: Cirros
2026-03-29 03:57:26.474467 | orchestrator | 2026-03-29 03:57:24 | INFO  | Setting property internal_version: 0.6.3
2026-03-29 03:57:26.474491 | orchestrator | 2026-03-29 03:57:24 | INFO  | Setting property image_original_user: cirros
2026-03-29 03:57:26.474502 | orchestrator | 2026-03-29 03:57:24 | INFO  | Setting property os_version: 0.6.3
2026-03-29 03:57:26.474513 | orchestrator | 2026-03-29 03:57:24 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-29 03:57:26.474524 | orchestrator | 2026-03-29 03:57:25 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-29 03:57:26.474534 | orchestrator | 2026-03-29 03:57:25 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-29 03:57:26.474545 | orchestrator | 2026-03-29 03:57:25 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-29 03:57:26.474556 | orchestrator | 2026-03-29 03:57:25 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-29 03:57:26.798479 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-29 03:57:29.293627 | orchestrator | 2026-03-29 03:57:29 | INFO  | date: 2026-03-29
2026-03-29 03:57:29.293770 | orchestrator | 2026-03-29 03:57:29 | INFO  | image: octavia-amphora-haproxy-2024.2.20260329.qcow2
2026-03-29 03:57:29.293802 | orchestrator | 2026-03-29 03:57:29 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2
2026-03-29 03:57:29.293814 | orchestrator | 2026-03-29 03:57:29 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2.CHECKSUM
2026-03-29 03:57:29.479031 | orchestrator | 2026-03-29 03:57:29 | INFO  | checksum: 5272c69684e4fe71f33dea08bbea00caea18adf692daa1ba22f6b007101fb94b
2026-03-29 03:57:29.550010 | orchestrator | 2026-03-29 03:57:29 | INFO  | It takes a moment until task 69654b35-5f94-48c1-88cb-e7495bb11154 (image-manager) has been started and output is visible here.
2026-03-29 03:58:42.397799 | orchestrator | 2026-03-29 03:57:32 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-29'
2026-03-29 03:58:42.397900 | orchestrator | 2026-03-29 03:57:32 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2: 200
2026-03-29 03:58:42.397914 | orchestrator | 2026-03-29 03:57:32 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-29
2026-03-29 03:58:42.397923 | orchestrator | 2026-03-29 03:57:32 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2
2026-03-29 03:58:42.397933 | orchestrator | 2026-03-29 03:57:33 | INFO  | Waiting for image to leave queued state...
2026-03-29 03:58:42.397941 | orchestrator | 2026-03-29 03:57:35 | INFO  | Waiting for import to complete...
2026-03-29 03:58:42.397949 | orchestrator | 2026-03-29 03:57:45 | INFO  | Waiting for import to complete...
2026-03-29 03:58:42.397957 | orchestrator | 2026-03-29 03:57:55 | INFO  | Waiting for import to complete...
2026-03-29 03:58:42.397965 | orchestrator | 2026-03-29 03:58:06 | INFO  | Waiting for import to complete...
2026-03-29 03:58:42.397991 | orchestrator | 2026-03-29 03:58:16 | INFO  | Waiting for import to complete...
2026-03-29 03:58:42.397999 | orchestrator | 2026-03-29 03:58:26 | INFO  | Waiting for import to complete...
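The bootstrap script above logs a SHA-256 value fetched from the `.CHECKSUM` URL before handing the amphora image to the image-manager. A minimal sketch of verifying a downloaded file against such a checksum (an assumed approach for illustration, not the script's actual code):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its hex SHA-256 digest,
    suitable for comparison with the value from a .CHECKSUM file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path, expected):
    """Raise if the file's digest does not match the published checksum."""
    actual = sha256_of(path)
    if actual != expected.lower():
        raise ValueError(f"checksum mismatch: {actual} != {expected}")
```

Streaming in chunks matters here because amphora images are hundreds of megabytes; reading the whole file into memory would be wasteful on a small manager node.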
2026-03-29 03:58:42.398065 | orchestrator | 2026-03-29 03:58:36 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-29' successfully completed, reloading images
2026-03-29 03:58:42.398075 | orchestrator | 2026-03-29 03:58:37 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-29'
2026-03-29 03:58:42.398108 | orchestrator | 2026-03-29 03:58:37 | INFO  | Setting internal_version = 2026-03-29
2026-03-29 03:58:42.398117 | orchestrator | 2026-03-29 03:58:37 | INFO  | Setting image_original_user = ubuntu
2026-03-29 03:58:42.398125 | orchestrator | 2026-03-29 03:58:37 | INFO  | Adding tag amphora
2026-03-29 03:58:42.398133 | orchestrator | 2026-03-29 03:58:37 | INFO  | Adding tag os:ubuntu
2026-03-29 03:58:42.398141 | orchestrator | 2026-03-29 03:58:37 | INFO  | Setting property architecture: x86_64
2026-03-29 03:58:42.398149 | orchestrator | 2026-03-29 03:58:37 | INFO  | Setting property hw_disk_bus: scsi
2026-03-29 03:58:42.398156 | orchestrator | 2026-03-29 03:58:38 | INFO  | Setting property hw_rng_model: virtio
2026-03-29 03:58:42.398164 | orchestrator | 2026-03-29 03:58:38 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-29 03:58:42.398172 | orchestrator | 2026-03-29 03:58:38 | INFO  | Setting property hw_watchdog_action: reset
2026-03-29 03:58:42.398181 | orchestrator | 2026-03-29 03:58:38 | INFO  | Setting property hypervisor_type: qemu
2026-03-29 03:58:42.398194 | orchestrator | 2026-03-29 03:58:38 | INFO  | Setting property os_distro: ubuntu
2026-03-29 03:58:42.398207 | orchestrator | 2026-03-29 03:58:39 | INFO  | Setting property replace_frequency: quarterly
2026-03-29 03:58:42.398219 | orchestrator | 2026-03-29 03:58:39 | INFO  | Setting property uuid_validity: last-1
2026-03-29 03:58:42.398233 | orchestrator | 2026-03-29 03:58:39 | INFO  | Setting property provided_until: none
2026-03-29 03:58:42.398246 | orchestrator | 2026-03-29 03:58:39 | INFO  | Setting property os_purpose: network
2026-03-29 03:58:42.398274 | orchestrator | 2026-03-29 03:58:40 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-29 03:58:42.398289 | orchestrator | 2026-03-29 03:58:40 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-29 03:58:42.398302 | orchestrator | 2026-03-29 03:58:40 | INFO  | Setting property internal_version: 2026-03-29
2026-03-29 03:58:42.398315 | orchestrator | 2026-03-29 03:58:40 | INFO  | Setting property image_original_user: ubuntu
2026-03-29 03:58:42.398329 | orchestrator | 2026-03-29 03:58:41 | INFO  | Setting property os_version: 2026-03-29
2026-03-29 03:58:42.398342 | orchestrator | 2026-03-29 03:58:41 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260329.qcow2
2026-03-29 03:58:42.398352 | orchestrator | 2026-03-29 03:58:41 | INFO  | Setting property image_build_date: 2026-03-29
2026-03-29 03:58:42.398361 | orchestrator | 2026-03-29 03:58:41 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-29'
2026-03-29 03:58:42.398370 | orchestrator | 2026-03-29 03:58:41 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-29'
2026-03-29 03:58:42.398396 | orchestrator | 2026-03-29 03:58:42 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-29 03:58:42.398406 | orchestrator | 2026-03-29 03:58:42 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-29 03:58:42.398416 | orchestrator | 2026-03-29 03:58:42 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-29 03:58:42.398425 | orchestrator | 2026-03-29 03:58:42 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-29 03:58:42.862129 | orchestrator | ok: Runtime: 0:03:06.736006
2026-03-29 03:58:42.885276 |
2026-03-29 03:58:42.885463 | TASK [Run checks]
2026-03-29 03:58:43.665359 | orchestrator | + set -e
2026-03-29 03:58:43.665566 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-29 03:58:43.665593 | orchestrator | ++ export INTERACTIVE=false
2026-03-29 03:58:43.665614 | orchestrator | ++ INTERACTIVE=false
2026-03-29 03:58:43.665628 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-29 03:58:43.665672 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-29 03:58:43.665687 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-29 03:58:43.665929 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-29 03:58:43.670593 | orchestrator |
2026-03-29 03:58:43.670726 | orchestrator | # CHECK
2026-03-29 03:58:43.670743 | orchestrator |
2026-03-29 03:58:43.670756 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-29 03:58:43.670774 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-29 03:58:43.670785 | orchestrator | + echo
2026-03-29 03:58:43.670797 | orchestrator | + echo '# CHECK'
2026-03-29 03:58:43.670808 | orchestrator | + echo
2026-03-29 03:58:43.670822 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-29 03:58:43.671185 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-29 03:58:43.722693 | orchestrator |
2026-03-29 03:58:43.722812 | orchestrator | ## Containers @ testbed-manager
2026-03-29 03:58:43.722826 | orchestrator |
2026-03-29 03:58:43.722838 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-29 03:58:43.722847 | orchestrator | + echo
2026-03-29 03:58:43.722858 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-29 03:58:43.722867 | orchestrator | + echo
2026-03-29 03:58:43.722875 | orchestrator | + osism container testbed-manager ps
2026-03-29 03:58:45.767904 | orchestrator | 2026-03-29 03:58:45 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-03-29 03:58:46.177942 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-29 03:58:46.178077 | orchestrator | b128df023a82 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-03-29 03:58:46.178093 | orchestrator | a1b415fbea16 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-03-29 03:58:46.178098 | orchestrator | 518597d11deb registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-29 03:58:46.178103 | orchestrator | 1784f983d1a9 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-29 03:58:46.178107 | orchestrator | e50f095b5d91 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-03-29 03:58:46.178114 | orchestrator | f578fc81f834 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 58 minutes ago Up 58 minutes cephclient
2026-03-29 03:58:46.178773 | orchestrator | d8f6978103a9 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-29 03:58:46.179212 | orchestrator | 702b7ee5d1fd registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-29 03:58:46.179251 | orchestrator | e508195aac8d registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-29 03:58:46.179256 | orchestrator | a670fccfe910 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-03-29 03:58:46.179260 | orchestrator | 91d522fb3f97 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-03-29 03:58:46.179264 | orchestrator | 5c160326f535 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-03-29 03:58:46.179269 | orchestrator | 05b5ab7881bc registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-03-29 03:58:46.179273 | orchestrator | eb4d9984631f registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-29 03:58:46.179277 | orchestrator | ef769f3c5365 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-03-29 03:58:46.179289 | orchestrator | fa2491c360d2 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-03-29 03:58:46.179293 | orchestrator | e5953bbe9d07 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-03-29 03:58:46.179297 | orchestrator | 3f8870a99980 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-03-29 03:58:46.179301 | orchestrator | 6bd5cca4c35e registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-03-29 03:58:46.179305 | orchestrator | edcccf46e6b3 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-03-29 03:58:46.179309 | orchestrator | 1b8b5f79ae10 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-29 03:58:46.179312 | orchestrator | b8e8d4784bd7 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-03-29 03:58:46.179316 | orchestrator | d49b9c87d8e7 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-03-29 03:58:46.179333 | orchestrator | 23674f7b5be4 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-29 03:58:46.179338 | orchestrator | d977b6cccacf registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-03-29 03:58:46.179342 | orchestrator | 3bb9b682baaa registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-03-29 03:58:46.179345 | orchestrator | 3d9b01c3292a registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-03-29 03:58:46.179349 | orchestrator | 6f9d49b227aa registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-03-29 03:58:46.179353 | orchestrator | e3f31e928d7f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-03-29 03:58:46.179360 | orchestrator | 764655b7517e registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-29 03:58:46.521619 | orchestrator |
2026-03-29 03:58:46.521791 | orchestrator | ## Images @ testbed-manager
2026-03-29 03:58:46.521808 | orchestrator |
2026-03-29 03:58:46.521821 | orchestrator | + echo
2026-03-29 03:58:46.521833 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-29 03:58:46.521845 | orchestrator | + echo
2026-03-29 03:58:46.521861 | orchestrator | + osism container testbed-manager images
2026-03-29 03:58:48.903999 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-29 03:58:48.904088 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 79a5ae258a23 24 hours ago 239MB
2026-03-29 03:58:48.904095 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-03-29 03:58:48.904100 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-29 03:58:48.904104 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB
2026-03-29 03:58:48.904108 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-29 03:58:48.904112 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-29 03:58:48.904116 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-29 03:58:48.904122 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB
2026-03-29 03:58:48.904126 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-29 03:58:48.904150 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB
2026-03-29 03:58:48.904154 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB
2026-03-29 03:58:48.904158 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-29 03:58:48.904162 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB
2026-03-29 03:58:48.904165 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB
2026-03-29 03:58:48.904169 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB
2026-03-29 03:58:48.904173 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB
2026-03-29 03:58:48.904177 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB
2026-03-29 03:58:48.904180 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB
2026-03-29 03:58:48.904185 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-29 03:58:48.904188 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-29 03:58:48.904192 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-03-29 03:58:48.904196 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-03-29 03:58:48.904199 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB
2026-03-29 03:58:48.904203 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-29 03:58:48.904208 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-03-29 03:58:49.206860 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-29 03:58:49.207410 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-29 03:58:49.265156 | orchestrator |
2026-03-29 03:58:49.265229 | orchestrator | ## Containers @ testbed-node-0
2026-03-29 03:58:49.265239 | orchestrator |
2026-03-29 03:58:49.265246 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-29 03:58:49.265252 | orchestrator | + echo
2026-03-29 03:58:49.265259 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-29 03:58:49.265266 | orchestrator | + echo
2026-03-29 03:58:49.265273 | orchestrator | + osism container testbed-node-0 ps
2026-03-29 03:58:51.684914 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-29 03:58:51.685024 | orchestrator | 4f88b8e574ee registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-29 03:58:51.685065 | orchestrator | a19fa2d381b4 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-29 03:58:51.685951 | orchestrator | 6b8972b4dc7a registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-03-29 03:58:51.686001 | orchestrator | c51e1ffa70d7 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-29 03:58:51.686060 | orchestrator | 08410f8e86a9 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-29 03:58:51.686070 | orchestrator | d1d17cc7890d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-03-29 03:58:51.686084 | orchestrator | a4e2b1dd6738 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-03-29 03:58:51.686092 | orchestrator | 95094a40b33f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-29 03:58:51.686100 | orchestrator | 98a83896ed24 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-03-29 03:58:51.686109 | orchestrator | 16c504eee729 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-03-29 03:58:51.686115 | orchestrator | aa4a0044358c registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-03-29 03:58:51.686121 | orchestrator | 52e14b9c40e8 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-03-29 03:58:51.686128 | orchestrator | e3cf5cfff072 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-03-29 03:58:51.686134 | orchestrator | 9b09af36b198 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-03-29 03:58:51.686141 | orchestrator | 9f45f297c3e8 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-03-29 03:58:51.686147 | orchestrator | 294b0b8cd72d registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-03-29 03:58:51.686153 | orchestrator | 44ecc7b8851d registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-03-29 03:58:51.686159 | orchestrator | f02738f59b16 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-03-29 03:58:51.686166 | orchestrator | 81c53b1fb815 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-03-29 03:58:51.686179 | orchestrator | 086a2f953524 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-03-29 03:58:51.686187 | orchestrator | 0b42bfd5ecd2 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-03-29 03:58:51.686194 | orchestrator | 7b304396dac7 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-03-29 03:58:51.686220 | orchestrator | 4860d72ab3ed registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-03-29 03:58:51.686228 | orchestrator | cb018af764d3 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-03-29 03:58:51.686235 | orchestrator | 89e3e7aa0420 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-03-29 03:58:51.686246 | orchestrator | 62e765c1a69f registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-03-29 03:58:51.686254 | orchestrator | 2c8c82cab76b registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-03-29 03:58:51.686260 | orchestrator | 5e44e9b745ff registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-03-29 03:58:51.686267 | orchestrator | 76574206dd77 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-03-29 03:58:51.686274 | orchestrator | 68af3539ef96 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-03-29 03:58:51.686282 | orchestrator | a57bd1fe047a registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-03-29 03:58:51.686289 | orchestrator | c27abfdcd6bd registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-03-29 03:58:51.686296 | orchestrator | ae879a3f7b86 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-03-29 03:58:51.686303 | orchestrator | 080c857d542c registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-03-29 03:58:51.686310 | orchestrator | 11de7ad56b60 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-03-29 03:58:51.686317 | orchestrator | c88cb9fdeaf5 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-03-29 03:58:51.686324 | orchestrator | 0fe0877dbedf registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-03-29 03:58:51.686331 | orchestrator | d9ca174b814d registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console
2026-03-29 03:58:51.686968 | orchestrator | 66e2a87dd125 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-03-29 03:58:51.687009 | orchestrator | 581f0dc51d58 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-03-29 03:58:51.687032 | orchestrator | 4f7194618b9a registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-03-29 03:58:51.687040 | orchestrator | 29f6a883f47a registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-03-29 03:58:51.687054 | orchestrator | 7727579e5042 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api
2026-03-29 03:58:51.687060 | orchestrator | c020a2cbf2f8 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-03-29 03:58:51.687066 | orchestrator | 39822aa73ed1 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-03-29 03:58:51.687072 | orchestrator | 9d79ff8c855b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api
2026-03-29 03:58:51.687078 | orchestrator | cb37eea3d8a6 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone
2026-03-29 03:58:51.687084 | orchestrator | a10f97748549 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet
2026-03-29 03:58:51.687091 | orchestrator | 2d0507ac923b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-03-29 03:58:51.687098 | orchestrator | 4dfe35ef28f7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-0
2026-03-29 03:58:51.687105 | orchestrator | ede5da209ed8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-03-29 03:58:51.687111 | orchestrator | 76a3923fe123 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-03-29 03:58:51.687118 | orchestrator | e7f18aca942a registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-29 03:58:51.687124 | orchestrator | 64e27a40b4da registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-29 03:58:51.687131 | orchestrator | c2d08445adcf registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-29 03:58:51.687137 | orchestrator | 0aeb2aa98343 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-29 03:58:51.687146 | orchestrator | 34ff030d24c8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-29 03:58:51.687153 | orchestrator | b2d1c071ff20 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-29 03:58:51.687163 | orchestrator | 1f032f50c4d6 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-29 03:58:51.687184 | orchestrator | 59c368dcd9eb registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-29 03:58:51.687191 | orchestrator | 00ea17c9c771 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-29 03:58:51.687197 | orchestrator | dc12ed921674 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-29 03:58:51.687204 | orchestrator | dfa82e911d23 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-29 03:58:51.687211 | orchestrator | efb5e3667a5c registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-03-29 03:58:51.687218 | orchestrator | f6ebd1311c70 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-03-29 03:58:51.687225 | orchestrator | 67ce875cb4ae registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-29 03:58:51.687231 | orchestrator | 38d3fa185e74 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-29 03:58:51.687236 | orchestrator | 7d86461cb3be registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-29 03:58:51.687244 | orchestrator | bbdb1ae1d7fe registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-29 03:58:51.687250 | orchestrator | f346329dfb6c registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2
hours kolla_toolbox 2026-03-29 03:58:51.687257 | orchestrator | c7533f148308 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-29 03:58:52.017274 | orchestrator | 2026-03-29 03:58:52.017386 | orchestrator | ## Images @ testbed-node-0 2026-03-29 03:58:52.017405 | orchestrator | 2026-03-29 03:58:52.017417 | orchestrator | + echo 2026-03-29 03:58:52.017428 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-29 03:58:52.017439 | orchestrator | + echo 2026-03-29 03:58:52.017450 | orchestrator | + osism container testbed-node-0 images 2026-03-29 03:58:54.374897 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 03:58:54.374990 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-29 03:58:54.374997 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-29 03:58:54.375003 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-29 03:58:54.375010 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-29 03:58:54.375029 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-29 03:58:54.375033 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-29 03:58:54.375036 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-29 03:58:54.375040 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-29 03:58:54.375044 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-29 03:58:54.375048 | orchestrator | 
registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-29 03:58:54.375051 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-29 03:58:54.375055 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-29 03:58:54.375066 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-29 03:58:54.375070 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-29 03:58:54.375073 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-29 03:58:54.375077 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-29 03:58:54.375081 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-29 03:58:54.375085 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-29 03:58:54.375089 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-29 03:58:54.375092 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-29 03:58:54.375096 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-29 03:58:54.375100 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-29 03:58:54.375104 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-29 03:58:54.375107 | 
orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-29 03:58:54.375111 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-29 03:58:54.375115 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-29 03:58:54.375119 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-29 03:58:54.375125 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-29 03:58:54.375129 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-29 03:58:54.375133 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-29 03:58:54.375140 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-29 03:58:54.375156 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-29 03:58:54.375161 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-29 03:58:54.375164 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-29 03:58:54.375168 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-29 03:58:54.375172 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-29 03:58:54.375176 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-29 03:58:54.375179 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-29 03:58:54.375183 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-29 03:58:54.375187 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-29 03:58:54.375191 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-29 03:58:54.375194 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-29 03:58:54.375198 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-29 03:58:54.375202 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-29 03:58:54.375206 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-29 03:58:54.375209 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-29 03:58:54.375214 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-29 03:58:54.375217 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-29 03:58:54.375221 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-29 03:58:54.375225 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-29 03:58:54.375229 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-29 03:58:54.375233 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-29 03:58:54.375236 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-29 03:58:54.375240 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-29 03:58:54.375244 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-29 03:58:54.375248 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-29 03:58:54.375254 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-29 03:58:54.375258 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-29 03:58:54.375273 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-29 03:58:54.375280 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-29 03:58:54.375285 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-29 03:58:54.375295 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-29 03:58:54.375311 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-29 03:58:54.375321 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-29 03:58:54.375327 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-29 03:58:54.375333 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-29 03:58:54.375338 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-29 03:58:54.375344 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-29 03:58:54.375350 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-29 03:58:54.702223 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-29 03:58:54.702529 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-29 03:58:54.766841 | orchestrator | 2026-03-29 03:58:54.766936 | orchestrator | ## Containers @ testbed-node-1 2026-03-29 03:58:54.766954 | orchestrator | 2026-03-29 03:58:54.766963 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-29 03:58:54.766972 | orchestrator | + echo 2026-03-29 03:58:54.766981 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-29 03:58:54.766990 | orchestrator | + echo 2026-03-29 03:58:54.766999 | orchestrator | + osism container testbed-node-1 ps 2026-03-29 03:58:57.200726 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-29 03:58:57.200823 | orchestrator | c9d9cb9ebb4c registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-29 03:58:57.200836 | orchestrator | 7d228fc27aaa registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-29 03:58:57.200845 | orchestrator | 0d8e56a56bc3 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-29 03:58:57.200853 | orchestrator | 67608270bee5 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 
9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-29 03:58:57.200863 | orchestrator | bfdbc3505388 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-29 03:58:57.200871 | orchestrator | f7dd3403e62b registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-03-29 03:58:57.200901 | orchestrator | bc0761d92f7f registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-29 03:58:57.200909 | orchestrator | 0d1761bf8393 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-29 03:58:57.200918 | orchestrator | c14bd8e7b0dc registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-03-29 03:58:57.200925 | orchestrator | 5225437d17a7 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-03-29 03:58:57.200934 | orchestrator | 090b26c8e57b registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-03-29 03:58:57.200942 | orchestrator | 2eb55a858cf9 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-03-29 03:58:57.200966 | orchestrator | 8c84bb57c54f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-03-29 03:58:57.200974 | orchestrator | 3d89cc929f1c registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-03-29 03:58:57.200982 | orchestrator | d97db0f609e5 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-03-29 03:58:57.200990 | orchestrator | f9a4d33ccccf registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-03-29 03:58:57.200998 | orchestrator | 704062f9e718 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-03-29 03:58:57.201006 | orchestrator | 764d3b2ebd2d registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-03-29 03:58:57.201014 | orchestrator | c7e885dbc2a6 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-03-29 03:58:57.201038 | orchestrator | ed2255a8a7e3 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-03-29 03:58:57.201047 | orchestrator | 1cd56006ad3d registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-03-29 03:58:57.201055 | orchestrator | 6a1d63e41fc7 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-03-29 03:58:57.201063 | orchestrator | 9b51574ba2d4 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-03-29 03:58:57.201077 | orchestrator | 9e9cc23db1bb 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-03-29 03:58:57.201085 | orchestrator | 9b9035e4c347 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-03-29 03:58:57.201093 | orchestrator | 64abe62fd1d3 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-03-29 03:58:57.201101 | orchestrator | adf6b8d9089f registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-03-29 03:58:57.201109 | orchestrator | afaac5af07b6 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-03-29 03:58:57.201117 | orchestrator | 75d39b60cd91 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-03-29 03:58:57.201125 | orchestrator | 93648f02db66 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-03-29 03:58:57.201134 | orchestrator | d6572dad5e11 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-03-29 03:58:57.201917 | orchestrator | 6ffe127cd628 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-03-29 03:58:57.201954 | orchestrator | 314344f87944 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-03-29 
03:58:57.201962 | orchestrator | d1b0bf189eda registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-03-29 03:58:57.201972 | orchestrator | d03e01f3e19f registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-03-29 03:58:57.201980 | orchestrator | 5b1afc4cf94f registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-03-29 03:58:57.201988 | orchestrator | 80af9f73af45 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-03-29 03:58:57.202006 | orchestrator | 582a88d83526 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-03-29 03:58:57.202055 | orchestrator | b4d9fd273fd9 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-03-29 03:58:57.202063 | orchestrator | 07ac6aef804a registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-03-29 03:58:57.202071 | orchestrator | 7afa5495e5e9 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-03-29 03:58:57.202089 | orchestrator | c0ed36cbbeb0 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-03-29 03:58:57.202097 | orchestrator | b9b485f81f47 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-03-29 03:58:57.202105 | orchestrator | 
fc4a475661e4 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-03-29 03:58:57.202113 | orchestrator | 7da6a6abe11e registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-03-29 03:58:57.202121 | orchestrator | 90bb4b30380c registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-03-29 03:58:57.202129 | orchestrator | cdb66a596606 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-03-29 03:58:57.202137 | orchestrator | d32c2ad37177 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-03-29 03:58:57.202144 | orchestrator | 02043f0c1c21 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-03-29 03:58:57.202152 | orchestrator | 54b21f009bc0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-1 2026-03-29 03:58:57.202161 | orchestrator | e9585001e909 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-03-29 03:58:57.202169 | orchestrator | a6db66d8015c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-03-29 03:58:57.202186 | orchestrator | c90a9f97c931 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-29 03:58:57.202194 | orchestrator | 9e4a7e3ebe0e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-29 03:58:57.202202 | orchestrator | a4b762b15381 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-29 03:58:57.202210 | orchestrator | 86a5fcb312d1 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-29 03:58:57.202218 | orchestrator | c452feb066b8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-29 03:58:57.202226 | orchestrator | 7a6105150343 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-29 03:58:57.202233 | orchestrator | b8f12290130f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-29 03:58:57.202245 | orchestrator | 624ce16a3c46 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-29 03:58:57.202253 | orchestrator | 5c50aeccd11c registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-29 03:58:57.202261 | orchestrator | c4fe7259bdc1 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-29 03:58:57.202269 | orchestrator | 099734e4eefd registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-29 03:58:57.202277 | orchestrator | ca850767e157 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-29 03:58:57.202285 | orchestrator | 3a3d74e2f440 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-03-29 03:58:57.202298 | orchestrator | fe4df1652463 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-29 03:58:57.202306 | orchestrator | 590ec56c97db registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-29 03:58:57.202314 | orchestrator | d2fd699bb0d7 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-03-29 03:58:57.202322 | orchestrator | eba1b5ad6861 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-29 03:58:57.202333 | orchestrator | 066f0c1ebed6 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-29 03:58:57.202342 | orchestrator | 2e4ba3adfd85 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-29 03:58:57.502417 | orchestrator | 2026-03-29 03:58:57.502493 | orchestrator | ## Images @ testbed-node-1 2026-03-29 03:58:57.502501 | orchestrator | 2026-03-29 03:58:57.502508 | orchestrator | + echo 2026-03-29 03:58:57.502515 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-29 03:58:57.502521 | orchestrator | + echo 2026-03-29 03:58:57.502527 | orchestrator | + osism container testbed-node-1 images 2026-03-29 03:59:00.033709 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 03:59:00.033790 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-29 03:59:00.033797 | 
orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-29 03:59:00.033803 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-29 03:59:00.033810 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-29 03:59:00.033815 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-29 03:59:00.033840 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-29 03:59:00.033846 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-29 03:59:00.034305 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-29 03:59:00.034326 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-29 03:59:00.034332 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-29 03:59:00.034337 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-29 03:59:00.034341 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-29 03:59:00.034346 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-29 03:59:00.034351 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-29 03:59:00.034359 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-29 03:59:00.034366 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 
0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-29 03:59:00.034377 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-29 03:59:00.034387 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-29 03:59:00.034393 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-29 03:59:00.034400 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-29 03:59:00.034408 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-29 03:59:00.034415 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-29 03:59:00.034422 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-29 03:59:00.034429 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-29 03:59:00.034436 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-29 03:59:00.034443 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-29 03:59:00.034460 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-29 03:59:00.034475 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-29 03:59:00.034483 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-29 03:59:00.034490 | orchestrator | 
registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-29 03:59:00.034498 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-29 03:59:00.035363 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-29 03:59:00.035398 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-29 03:59:00.035404 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-29 03:59:00.035408 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-29 03:59:00.035414 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-29 03:59:00.035418 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-29 03:59:00.035423 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-29 03:59:00.035428 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-29 03:59:00.035444 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-29 03:59:00.035450 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-29 03:59:00.035454 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-29 03:59:00.035459 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-29 03:59:00.035464 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-29 03:59:00.035468 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-29 03:59:00.035473 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-29 03:59:00.035478 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-29 03:59:00.035482 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-29 03:59:00.035487 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-29 03:59:00.035492 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-29 03:59:00.035496 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-29 03:59:00.035501 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-29 03:59:00.035505 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-29 03:59:00.035510 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-29 03:59:00.035515 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-29 03:59:00.035520 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-29 03:59:00.035525 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-29 03:59:00.035529 | 
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-29 03:59:00.035538 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-29 03:59:00.035545 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-29 03:59:00.035550 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-29 03:59:00.035555 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-29 03:59:00.035559 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-29 03:59:00.035564 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-29 03:59:00.035576 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-29 03:59:00.035581 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-29 03:59:00.035586 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-29 03:59:00.035591 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-29 03:59:00.035595 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-29 03:59:00.370070 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-29 03:59:00.370294 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-29 03:59:00.433218 | orchestrator | 2026-03-29 03:59:00.433303 | orchestrator | ## Containers @ testbed-node-2 2026-03-29 03:59:00.433313 | orchestrator | 
2026-03-29 03:59:00.433320 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-29 03:59:00.433327 | orchestrator | + echo 2026-03-29 03:59:00.433335 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-29 03:59:00.433342 | orchestrator | + echo 2026-03-29 03:59:00.433349 | orchestrator | + osism container testbed-node-2 ps 2026-03-29 03:59:02.865130 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-29 03:59:02.865226 | orchestrator | 846628ba3548 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-29 03:59:02.865245 | orchestrator | ca087c8349db registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-29 03:59:02.865259 | orchestrator | bffabce45e12 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-29 03:59:02.865274 | orchestrator | d5acebb623b9 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-29 03:59:02.865290 | orchestrator | 9160e1c6fb88 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-29 03:59:02.865304 | orchestrator | 4c82212151b4 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-03-29 03:59:02.865320 | orchestrator | 75e4335dcbda registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-29 03:59:02.865357 | orchestrator | 9e8395c7d2c1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 
minutes ago Up 10 minutes prometheus_node_exporter 2026-03-29 03:59:02.865367 | orchestrator | 036932c902e7 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-03-29 03:59:02.865376 | orchestrator | 26c62547a200 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-03-29 03:59:02.865384 | orchestrator | fb8d571c2ec4 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-03-29 03:59:02.865392 | orchestrator | 96249f61b923 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-03-29 03:59:02.865401 | orchestrator | 86b347dd3168 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-03-29 03:59:02.865409 | orchestrator | 6c768fa89cfc registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-03-29 03:59:02.865425 | orchestrator | 8c7b5b95ba3e registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-03-29 03:59:02.865438 | orchestrator | 37bcf5907daa registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-03-29 03:59:02.865448 | orchestrator | ca656d23f0bf registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-03-29 03:59:02.865456 | orchestrator | 4ddc25cbd12f registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes 
(healthy) ceilometer_notification 2026-03-29 03:59:02.865464 | orchestrator | 6aecd051b3d9 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-03-29 03:59:02.865487 | orchestrator | 0244f206cbee registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-03-29 03:59:02.865496 | orchestrator | 6f2d54b3107a registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-03-29 03:59:02.865504 | orchestrator | ac6ad410c130 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-03-29 03:59:02.865512 | orchestrator | 584a84ed1ba3 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-03-29 03:59:02.865520 | orchestrator | 199b2846caea registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-03-29 03:59:02.865528 | orchestrator | c85db2bb543e registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-03-29 03:59:02.865548 | orchestrator | b1dee2e5f903 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-03-29 03:59:02.865567 | orchestrator | 1a38db074d6c registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-03-29 03:59:02.865584 | orchestrator | 5fa148647b5e registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init 
--single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-03-29 03:59:02.865596 | orchestrator | c2e0eec9d52e registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-03-29 03:59:02.865609 | orchestrator | 1bfcc739f0db registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-03-29 03:59:02.865649 | orchestrator | e94d30d4d433 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-03-29 03:59:02.865663 | orchestrator | 96cbfad567cd registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-03-29 03:59:02.865683 | orchestrator | b824a1ce18c0 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-03-29 03:59:02.865717 | orchestrator | 5fb35f2be3db registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) cinder_volume 2026-03-29 03:59:02.865731 | orchestrator | 5c60dd2de086 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-03-29 03:59:02.865743 | orchestrator | f34dab8b9c12 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-03-29 03:59:02.865756 | orchestrator | d8a058e90025 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-03-29 03:59:02.865769 | orchestrator | dee4148b0526 
registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-03-29 03:59:02.865782 | orchestrator | c33d80cc3c92 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-03-29 03:59:02.865805 | orchestrator | 07edc0c6a191 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-03-29 03:59:02.865819 | orchestrator | e523dc1d0e2a registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-03-29 03:59:02.865830 | orchestrator | d2abc5a6a260 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-03-29 03:59:02.865853 | orchestrator | 90d49f62faa8 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-03-29 03:59:02.865866 | orchestrator | e68ad2e756c0 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-03-29 03:59:02.865879 | orchestrator | e142d9dbb251 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-03-29 03:59:02.865891 | orchestrator | 7fd3ce9f2320 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-03-29 03:59:02.865905 | orchestrator | bd9e42b4eda7 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-03-29 03:59:02.865917 | orchestrator | 3f2fa65b1895 
registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-03-29 03:59:02.865930 | orchestrator | 0764c3e76e17 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-03-29 03:59:02.865943 | orchestrator | 8ac64223a2a6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-2 2026-03-29 03:59:02.865958 | orchestrator | 202b6519157a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-03-29 03:59:02.865972 | orchestrator | 5a2b09aac491 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-03-29 03:59:02.865985 | orchestrator | 05a8075d3b69 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-29 03:59:02.865998 | orchestrator | 9253c336a9bb registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-29 03:59:02.866106 | orchestrator | db1a8b755eaa registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-29 03:59:02.866130 | orchestrator | a709bc57c514 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-29 03:59:02.866143 | orchestrator | 13cb593a29bf registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-29 03:59:02.866156 | orchestrator | 76333ba47a8c registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-29 03:59:02.866164 | orchestrator | 28e369022677 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-29 03:59:02.866182 | orchestrator | fd8ab51f19a0 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-29 03:59:02.866199 | orchestrator | dc4122845d4b registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-29 03:59:02.866208 | orchestrator | 65dfe1e88bbc registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-29 03:59:02.866216 | orchestrator | a83d4f575a9a registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-29 03:59:02.866224 | orchestrator | 9e5fb41fa56a registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-29 03:59:02.866232 | orchestrator | 59cef746f0eb registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-03-29 03:59:02.866240 | orchestrator | c1af5887e657 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-29 03:59:02.866248 | orchestrator | cd567750859a registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-29 03:59:02.866256 | orchestrator | d1d7331f0616 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours 
(healthy) haproxy 2026-03-29 03:59:02.866264 | orchestrator | 7e9c00bed205 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-29 03:59:02.866272 | orchestrator | f45be49a1156 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-29 03:59:02.866280 | orchestrator | 2a7d2e1f5b35 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-29 03:59:03.191762 | orchestrator | 2026-03-29 03:59:03.191846 | orchestrator | ## Images @ testbed-node-2 2026-03-29 03:59:03.191857 | orchestrator | 2026-03-29 03:59:03.191865 | orchestrator | + echo 2026-03-29 03:59:03.191873 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-29 03:59:03.191882 | orchestrator | + echo 2026-03-29 03:59:03.191890 | orchestrator | + osism container testbed-node-2 images 2026-03-29 03:59:05.551805 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 03:59:05.551888 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-29 03:59:05.551913 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-29 03:59:05.551921 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-29 03:59:05.551927 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-29 03:59:05.551933 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-29 03:59:05.551940 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-29 03:59:05.551946 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 
2026-03-29 03:59:05.551967 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-29 03:59:05.551973 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-29 03:59:05.551978 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-29 03:59:05.551987 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-29 03:59:05.551993 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-29 03:59:05.551999 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-29 03:59:05.552004 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-29 03:59:05.552010 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-29 03:59:05.552015 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-29 03:59:05.552021 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-29 03:59:05.552026 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-29 03:59:05.552032 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-29 03:59:05.552037 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-29 03:59:05.552042 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 
2026-03-29 03:59:05.552048 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-29 03:59:05.552054 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-29 03:59:05.552059 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-29 03:59:05.552066 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-29 03:59:05.552072 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-29 03:59:05.552078 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-29 03:59:05.552083 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-29 03:59:05.552089 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-29 03:59:05.552095 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-29 03:59:05.552101 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-29 03:59:05.552125 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-29 03:59:05.552132 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-29 03:59:05.552137 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-29 03:59:05.552149 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-29 
03:59:05.552155 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-29 03:59:05.552161 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-29 03:59:05.552167 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-29 03:59:05.552172 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-29 03:59:05.552177 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-29 03:59:05.552184 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-29 03:59:05.552191 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-29 03:59:05.552197 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-29 03:59:05.552210 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-29 03:59:05.552217 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-29 03:59:05.552223 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-29 03:59:05.552230 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-29 03:59:05.552236 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-29 03:59:05.552243 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-29 
03:59:05.552249 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-29 03:59:05.552255 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-29 03:59:05.552365 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-29 03:59:05.552372 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-29 03:59:05.552376 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-29 03:59:05.552380 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-29 03:59:05.552384 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-29 03:59:05.552388 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-29 03:59:05.552392 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-29 03:59:05.552395 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-29 03:59:05.552399 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-29 03:59:05.552409 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-29 03:59:05.552413 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-29 03:59:05.552417 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-29 
03:59:05.552420 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-29 03:59:05.552424 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-29 03:59:05.552428 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-29 03:59:05.552436 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-29 03:59:05.552440 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-29 03:59:05.552444 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-29 03:59:05.894780 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-29 03:59:05.901332 | orchestrator | + set -e 2026-03-29 03:59:05.901491 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 03:59:05.901506 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 03:59:05.901515 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 03:59:05.901524 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 03:59:05.901533 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 03:59:05.901542 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 03:59:05.901552 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 03:59:05.901561 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 03:59:05.901569 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 03:59:05.901578 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 03:59:05.901587 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 03:59:05.901595 | orchestrator | ++ export ARA=false 2026-03-29 03:59:05.901604 | orchestrator | ++ ARA=false 2026-03-29 03:59:05.901612 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 03:59:05.901651 | orchestrator 
| ++ DEPLOY_MODE=manager 2026-03-29 03:59:05.901661 | orchestrator | ++ export TEMPEST=false 2026-03-29 03:59:05.901670 | orchestrator | ++ TEMPEST=false 2026-03-29 03:59:05.901678 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 03:59:05.901687 | orchestrator | ++ IS_ZUUL=true 2026-03-29 03:59:05.901696 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 03:59:05.901705 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 03:59:05.901714 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 03:59:05.901723 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 03:59:05.901731 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 03:59:05.901740 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 03:59:05.901750 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 03:59:05.901759 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 03:59:05.901768 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 03:59:05.901776 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 03:59:05.901785 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 03:59:05.901795 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-29 03:59:05.910706 | orchestrator | + set -e 2026-03-29 03:59:05.910800 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 03:59:05.910819 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 03:59:05.910833 | orchestrator | ++ INTERACTIVE=false 2026-03-29 03:59:05.910846 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 03:59:05.910859 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 03:59:05.910872 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-29 03:59:05.912115 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-29 03:59:05.918317 | orchestrator | 2026-03-29 03:59:05.918388 | orchestrator | # 
Ceph status 2026-03-29 03:59:05.918397 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 03:59:05.918425 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 03:59:05.918433 | orchestrator | + echo 2026-03-29 03:59:05.918440 | orchestrator | + echo '# Ceph status' 2026-03-29 03:59:05.918492 | orchestrator | 2026-03-29 03:59:05.918499 | orchestrator | + echo 2026-03-29 03:59:05.918505 | orchestrator | + ceph -s 2026-03-29 03:59:06.527779 | orchestrator | cluster: 2026-03-29 03:59:06.527910 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-29 03:59:06.527936 | orchestrator | health: HEALTH_OK 2026-03-29 03:59:06.527953 | orchestrator | 2026-03-29 03:59:06.527970 | orchestrator | services: 2026-03-29 03:59:06.527988 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 69m) 2026-03-29 03:59:06.528007 | orchestrator | mgr: testbed-node-0(active, since 56m), standbys: testbed-node-1, testbed-node-2 2026-03-29 03:59:06.528024 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-29 03:59:06.528041 | orchestrator | osd: 6 osds: 6 up (since 65m), 6 in (since 66m) 2026-03-29 03:59:06.528056 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-29 03:59:06.528074 | orchestrator | 2026-03-29 03:59:06.528089 | orchestrator | data: 2026-03-29 03:59:06.528106 | orchestrator | volumes: 1/1 healthy 2026-03-29 03:59:06.528122 | orchestrator | pools: 14 pools, 417 pgs 2026-03-29 03:59:06.528138 | orchestrator | objects: 555 objects, 2.2 GiB 2026-03-29 03:59:06.528154 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-29 03:59:06.528171 | orchestrator | pgs: 417 active+clean 2026-03-29 03:59:06.528187 | orchestrator | 2026-03-29 03:59:06.572658 | orchestrator | 2026-03-29 03:59:06.572735 | orchestrator | # Ceph versions 2026-03-29 03:59:06.572742 | orchestrator | 2026-03-29 03:59:06.572748 | orchestrator | + echo 2026-03-29 03:59:06.572752 | orchestrator | + echo '# Ceph versions' 
2026-03-29 03:59:06.572757 | orchestrator | + echo 2026-03-29 03:59:06.572761 | orchestrator | + ceph versions 2026-03-29 03:59:07.171911 | orchestrator | { 2026-03-29 03:59:07.172001 | orchestrator | "mon": { 2026-03-29 03:59:07.172010 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-29 03:59:07.172019 | orchestrator | }, 2026-03-29 03:59:07.172025 | orchestrator | "mgr": { 2026-03-29 03:59:07.172031 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-29 03:59:07.172038 | orchestrator | }, 2026-03-29 03:59:07.172044 | orchestrator | "osd": { 2026-03-29 03:59:07.172050 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-03-29 03:59:07.172057 | orchestrator | }, 2026-03-29 03:59:07.172063 | orchestrator | "mds": { 2026-03-29 03:59:07.172069 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-29 03:59:07.172074 | orchestrator | }, 2026-03-29 03:59:07.172080 | orchestrator | "rgw": { 2026-03-29 03:59:07.172087 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-29 03:59:07.172094 | orchestrator | }, 2026-03-29 03:59:07.172100 | orchestrator | "overall": { 2026-03-29 03:59:07.172106 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-03-29 03:59:07.172112 | orchestrator | } 2026-03-29 03:59:07.172117 | orchestrator | } 2026-03-29 03:59:07.233301 | orchestrator | 2026-03-29 03:59:07.233395 | orchestrator | # Ceph OSD tree 2026-03-29 03:59:07.233407 | orchestrator | 2026-03-29 03:59:07.233415 | orchestrator | + echo 2026-03-29 03:59:07.233424 | orchestrator | + echo '# Ceph OSD tree' 2026-03-29 03:59:07.233432 | orchestrator | + echo 2026-03-29 03:59:07.233439 | orchestrator | + ceph osd df tree 2026-03-29 03:59:07.765744 | orchestrator | ID CLASS 
WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-29 03:59:07.765905 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 406 MiB 113 GiB 5.90 1.00 - root default 2026-03-29 03:59:07.765919 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 0.99 - host testbed-node-3 2026-03-29 03:59:07.765926 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.67 0.96 196 up osd.0 2026-03-29 03:59:07.765932 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.06 1.03 210 up osd.3 2026-03-29 03:59:07.765939 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2026-03-29 03:59:07.765985 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.97 1.01 199 up osd.1 2026-03-29 03:59:07.765993 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.85 0.99 209 up osd.4 2026-03-29 03:59:07.766000 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-03-29 03:59:07.766007 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 78 MiB 19 GiB 7.16 1.21 198 up osd.2 2026-03-29 03:59:07.766076 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 957 MiB 891 MiB 1 KiB 66 MiB 19 GiB 4.68 0.79 206 up osd.5 2026-03-29 03:59:07.766093 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 406 MiB 113 GiB 5.90 2026-03-29 03:59:07.766104 | orchestrator | MIN/MAX VAR: 0.79/1.21 STDDEV: 0.73 2026-03-29 03:59:07.809574 | orchestrator | 2026-03-29 03:59:07.809719 | orchestrator | # Ceph monitor status 2026-03-29 03:59:07.809745 | orchestrator | 2026-03-29 03:59:07.809762 | orchestrator | + echo 2026-03-29 03:59:07.809774 | orchestrator | + echo '# Ceph monitor status' 2026-03-29 03:59:07.809789 | orchestrator | + echo 2026-03-29 03:59:07.809809 | orchestrator | + ceph mon stat 2026-03-29 03:59:08.420350 
| orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-29 03:59:08.465152 | orchestrator | 2026-03-29 03:59:08.465232 | orchestrator | # Ceph quorum status 2026-03-29 03:59:08.465243 | orchestrator | 2026-03-29 03:59:08.465252 | orchestrator | + echo 2026-03-29 03:59:08.465260 | orchestrator | + echo '# Ceph quorum status' 2026-03-29 03:59:08.465268 | orchestrator | + echo 2026-03-29 03:59:08.465714 | orchestrator | + jq 2026-03-29 03:59:08.465735 | orchestrator | + ceph quorum_status 2026-03-29 03:59:09.098367 | orchestrator | { 2026-03-29 03:59:09.098521 | orchestrator | "election_epoch": 6, 2026-03-29 03:59:09.098538 | orchestrator | "quorum": [ 2026-03-29 03:59:09.098546 | orchestrator | 0, 2026-03-29 03:59:09.098552 | orchestrator | 1, 2026-03-29 03:59:09.098559 | orchestrator | 2 2026-03-29 03:59:09.098566 | orchestrator | ], 2026-03-29 03:59:09.098572 | orchestrator | "quorum_names": [ 2026-03-29 03:59:09.098579 | orchestrator | "testbed-node-0", 2026-03-29 03:59:09.098586 | orchestrator | "testbed-node-1", 2026-03-29 03:59:09.098593 | orchestrator | "testbed-node-2" 2026-03-29 03:59:09.098599 | orchestrator | ], 2026-03-29 03:59:09.098606 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-29 03:59:09.098613 | orchestrator | "quorum_age": 4175, 2026-03-29 03:59:09.098640 | orchestrator | "features": { 2026-03-29 03:59:09.098645 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-29 03:59:09.098649 | orchestrator | "quorum_mon": [ 2026-03-29 03:59:09.098653 | orchestrator | "kraken", 2026-03-29 03:59:09.098657 | orchestrator | "luminous", 2026-03-29 03:59:09.098661 | orchestrator | "mimic", 2026-03-29 03:59:09.098665 | 
orchestrator | "osdmap-prune", 2026-03-29 03:59:09.098669 | orchestrator | "nautilus", 2026-03-29 03:59:09.098672 | orchestrator | "octopus", 2026-03-29 03:59:09.098676 | orchestrator | "pacific", 2026-03-29 03:59:09.098680 | orchestrator | "elector-pinging", 2026-03-29 03:59:09.098684 | orchestrator | "quincy", 2026-03-29 03:59:09.098688 | orchestrator | "reef" 2026-03-29 03:59:09.098692 | orchestrator | ] 2026-03-29 03:59:09.098695 | orchestrator | }, 2026-03-29 03:59:09.098699 | orchestrator | "monmap": { 2026-03-29 03:59:09.098703 | orchestrator | "epoch": 1, 2026-03-29 03:59:09.098707 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-29 03:59:09.098712 | orchestrator | "modified": "2026-03-29T02:49:11.157285Z", 2026-03-29 03:59:09.098717 | orchestrator | "created": "2026-03-29T02:49:11.157285Z", 2026-03-29 03:59:09.098723 | orchestrator | "min_mon_release": 18, 2026-03-29 03:59:09.098729 | orchestrator | "min_mon_release_name": "reef", 2026-03-29 03:59:09.098735 | orchestrator | "election_strategy": 1, 2026-03-29 03:59:09.098741 | orchestrator | "disallowed_leaders: ": "", 2026-03-29 03:59:09.098747 | orchestrator | "stretch_mode": false, 2026-03-29 03:59:09.098753 | orchestrator | "tiebreaker_mon": "", 2026-03-29 03:59:09.098786 | orchestrator | "removed_ranks: ": "", 2026-03-29 03:59:09.098793 | orchestrator | "features": { 2026-03-29 03:59:09.098799 | orchestrator | "persistent": [ 2026-03-29 03:59:09.098806 | orchestrator | "kraken", 2026-03-29 03:59:09.098812 | orchestrator | "luminous", 2026-03-29 03:59:09.098819 | orchestrator | "mimic", 2026-03-29 03:59:09.098826 | orchestrator | "osdmap-prune", 2026-03-29 03:59:09.098831 | orchestrator | "nautilus", 2026-03-29 03:59:09.098838 | orchestrator | "octopus", 2026-03-29 03:59:09.098845 | orchestrator | "pacific", 2026-03-29 03:59:09.098849 | orchestrator | "elector-pinging", 2026-03-29 03:59:09.098853 | orchestrator | "quincy", 2026-03-29 03:59:09.098856 | orchestrator | "reef" 
2026-03-29 03:59:09.098861 | orchestrator | ], 2026-03-29 03:59:09.098866 | orchestrator | "optional": [] 2026-03-29 03:59:09.098873 | orchestrator | }, 2026-03-29 03:59:09.098879 | orchestrator | "mons": [ 2026-03-29 03:59:09.098885 | orchestrator | { 2026-03-29 03:59:09.098891 | orchestrator | "rank": 0, 2026-03-29 03:59:09.098898 | orchestrator | "name": "testbed-node-0", 2026-03-29 03:59:09.098904 | orchestrator | "public_addrs": { 2026-03-29 03:59:09.098911 | orchestrator | "addrvec": [ 2026-03-29 03:59:09.098917 | orchestrator | { 2026-03-29 03:59:09.098924 | orchestrator | "type": "v2", 2026-03-29 03:59:09.098932 | orchestrator | "addr": "192.168.16.8:3300", 2026-03-29 03:59:09.098938 | orchestrator | "nonce": 0 2026-03-29 03:59:09.098943 | orchestrator | }, 2026-03-29 03:59:09.098950 | orchestrator | { 2026-03-29 03:59:09.098956 | orchestrator | "type": "v1", 2026-03-29 03:59:09.098962 | orchestrator | "addr": "192.168.16.8:6789", 2026-03-29 03:59:09.098970 | orchestrator | "nonce": 0 2026-03-29 03:59:09.098976 | orchestrator | } 2026-03-29 03:59:09.098982 | orchestrator | ] 2026-03-29 03:59:09.098989 | orchestrator | }, 2026-03-29 03:59:09.098995 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-03-29 03:59:09.099001 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-03-29 03:59:09.099008 | orchestrator | "priority": 0, 2026-03-29 03:59:09.099016 | orchestrator | "weight": 0, 2026-03-29 03:59:09.099022 | orchestrator | "crush_location": "{}" 2026-03-29 03:59:09.099029 | orchestrator | }, 2026-03-29 03:59:09.099036 | orchestrator | { 2026-03-29 03:59:09.099042 | orchestrator | "rank": 1, 2026-03-29 03:59:09.099050 | orchestrator | "name": "testbed-node-1", 2026-03-29 03:59:09.099058 | orchestrator | "public_addrs": { 2026-03-29 03:59:09.099064 | orchestrator | "addrvec": [ 2026-03-29 03:59:09.099071 | orchestrator | { 2026-03-29 03:59:09.099079 | orchestrator | "type": "v2", 2026-03-29 03:59:09.099102 | orchestrator | "addr": 
"192.168.16.11:3300", 2026-03-29 03:59:09.099110 | orchestrator | "nonce": 0 2026-03-29 03:59:09.099116 | orchestrator | }, 2026-03-29 03:59:09.099122 | orchestrator | { 2026-03-29 03:59:09.099129 | orchestrator | "type": "v1", 2026-03-29 03:59:09.099136 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-29 03:59:09.099142 | orchestrator | "nonce": 0 2026-03-29 03:59:09.099149 | orchestrator | } 2026-03-29 03:59:09.099155 | orchestrator | ] 2026-03-29 03:59:09.099162 | orchestrator | }, 2026-03-29 03:59:09.099168 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-29 03:59:09.099175 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-29 03:59:09.099182 | orchestrator | "priority": 0, 2026-03-29 03:59:09.099188 | orchestrator | "weight": 0, 2026-03-29 03:59:09.099195 | orchestrator | "crush_location": "{}" 2026-03-29 03:59:09.099201 | orchestrator | }, 2026-03-29 03:59:09.099208 | orchestrator | { 2026-03-29 03:59:09.099214 | orchestrator | "rank": 2, 2026-03-29 03:59:09.099221 | orchestrator | "name": "testbed-node-2", 2026-03-29 03:59:09.099227 | orchestrator | "public_addrs": { 2026-03-29 03:59:09.099235 | orchestrator | "addrvec": [ 2026-03-29 03:59:09.099241 | orchestrator | { 2026-03-29 03:59:09.099248 | orchestrator | "type": "v2", 2026-03-29 03:59:09.099254 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-29 03:59:09.099261 | orchestrator | "nonce": 0 2026-03-29 03:59:09.099267 | orchestrator | }, 2026-03-29 03:59:09.099273 | orchestrator | { 2026-03-29 03:59:09.099279 | orchestrator | "type": "v1", 2026-03-29 03:59:09.099286 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-29 03:59:09.099292 | orchestrator | "nonce": 0 2026-03-29 03:59:09.099298 | orchestrator | } 2026-03-29 03:59:09.099305 | orchestrator | ] 2026-03-29 03:59:09.099311 | orchestrator | }, 2026-03-29 03:59:09.099326 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-29 03:59:09.099332 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-29 
03:59:09.099339 | orchestrator | "priority": 0, 2026-03-29 03:59:09.099345 | orchestrator | "weight": 0, 2026-03-29 03:59:09.099351 | orchestrator | "crush_location": "{}" 2026-03-29 03:59:09.099357 | orchestrator | } 2026-03-29 03:59:09.099364 | orchestrator | ] 2026-03-29 03:59:09.099370 | orchestrator | } 2026-03-29 03:59:09.099376 | orchestrator | } 2026-03-29 03:59:09.099486 | orchestrator | 2026-03-29 03:59:09.099495 | orchestrator | # Ceph free space status 2026-03-29 03:59:09.099501 | orchestrator | 2026-03-29 03:59:09.099508 | orchestrator | + echo 2026-03-29 03:59:09.099520 | orchestrator | + echo '# Ceph free space status' 2026-03-29 03:59:09.099526 | orchestrator | + echo 2026-03-29 03:59:09.099533 | orchestrator | + ceph df 2026-03-29 03:59:09.671279 | orchestrator | --- RAW STORAGE --- 2026-03-29 03:59:09.671493 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-29 03:59:09.671538 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-03-29 03:59:09.671556 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-03-29 03:59:09.671575 | orchestrator | 2026-03-29 03:59:09.671593 | orchestrator | --- POOLS --- 2026-03-29 03:59:09.671612 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-29 03:59:09.671686 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-29 03:59:09.671703 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-03-29 03:59:09.671719 | orchestrator | cephfs_metadata 3 32 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-29 03:59:09.671736 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-29 03:59:09.671752 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-29 03:59:09.671768 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-29 03:59:09.671783 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-29 03:59:09.671797 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-29 03:59:09.671811 | 
orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2026-03-29 03:59:09.671827 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-29 03:59:09.671844 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-29 03:59:09.671861 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2026-03-29 03:59:09.671877 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-29 03:59:09.671891 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-29 03:59:09.714708 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-29 03:59:09.772384 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-29 03:59:09.772453 | orchestrator | + osism apply facts 2026-03-29 03:59:11.850249 | orchestrator | 2026-03-29 03:59:11 | INFO  | Task 4f761932-4821-4f3a-9988-fb1cada4b506 (facts) was prepared for execution. 2026-03-29 03:59:11.850344 | orchestrator | 2026-03-29 03:59:11 | INFO  | It takes a moment until task 4f761932-4821-4f3a-9988-fb1cada4b506 (facts) has been started and output is visible here. 2026-03-29 03:59:25.518557 | orchestrator | 2026-03-29 03:59:25.518706 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-29 03:59:25.518718 | orchestrator | 2026-03-29 03:59:25.518726 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-29 03:59:25.518733 | orchestrator | Sunday 29 March 2026 03:59:16 +0000 (0:00:00.290) 0:00:00.290 ********** 2026-03-29 03:59:25.518738 | orchestrator | ok: [testbed-manager] 2026-03-29 03:59:25.518746 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:25.518752 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:59:25.518759 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:59:25.518766 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:59:25.518773 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:59:25.518779 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:59:25.518846 | orchestrator | 2026-03-29 03:59:25.518854 | orchestrator | 
TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-29 03:59:25.518861 | orchestrator | Sunday 29 March 2026 03:59:17 +0000 (0:00:01.235) 0:00:01.525 ********** 2026-03-29 03:59:25.518868 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:59:25.518876 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:25.518882 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:59:25.518889 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:59:25.518896 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:59:25.518902 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:59:25.518909 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:59:25.518915 | orchestrator | 2026-03-29 03:59:25.518922 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 03:59:25.518928 | orchestrator | 2026-03-29 03:59:25.518935 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-29 03:59:25.518941 | orchestrator | Sunday 29 March 2026 03:59:18 +0000 (0:00:01.446) 0:00:02.971 ********** 2026-03-29 03:59:25.518947 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:25.518954 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:59:25.518960 | orchestrator | ok: [testbed-manager] 2026-03-29 03:59:25.518967 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:59:25.518973 | orchestrator | ok: [testbed-node-3] 2026-03-29 03:59:25.518979 | orchestrator | ok: [testbed-node-4] 2026-03-29 03:59:25.518985 | orchestrator | ok: [testbed-node-5] 2026-03-29 03:59:25.518992 | orchestrator | 2026-03-29 03:59:25.518998 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 03:59:25.519005 | orchestrator | 2026-03-29 03:59:25.519012 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 03:59:25.519018 | orchestrator | Sunday 29 
March 2026 03:59:24 +0000 (0:00:05.401) 0:00:08.373 ********** 2026-03-29 03:59:25.519025 | orchestrator | skipping: [testbed-manager] 2026-03-29 03:59:25.519031 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:25.519038 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:59:25.519044 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:59:25.519051 | orchestrator | skipping: [testbed-node-3] 2026-03-29 03:59:25.519057 | orchestrator | skipping: [testbed-node-4] 2026-03-29 03:59:25.519064 | orchestrator | skipping: [testbed-node-5] 2026-03-29 03:59:25.519070 | orchestrator | 2026-03-29 03:59:25.519077 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 03:59:25.519084 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 03:59:25.519092 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 03:59:25.519112 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 03:59:25.519120 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 03:59:25.519126 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 03:59:25.519133 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 03:59:25.519140 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 03:59:25.519147 | orchestrator | 2026-03-29 03:59:25.519154 | orchestrator | 2026-03-29 03:59:25.519161 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 03:59:25.519168 | orchestrator | Sunday 29 March 2026 03:59:25 +0000 (0:00:00.607) 0:00:08.981 ********** 2026-03-29 
03:59:25.519174 | orchestrator | =============================================================================== 2026-03-29 03:59:25.519186 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.40s 2026-03-29 03:59:25.519193 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.45s 2026-03-29 03:59:25.519200 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s 2026-03-29 03:59:25.519207 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-03-29 03:59:25.890858 | orchestrator | + osism validate ceph-mons 2026-03-29 03:59:58.758870 | orchestrator | 2026-03-29 03:59:58.758999 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-29 03:59:58.759012 | orchestrator | 2026-03-29 03:59:58.759019 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-29 03:59:58.759027 | orchestrator | Sunday 29 March 2026 03:59:42 +0000 (0:00:00.446) 0:00:00.446 ********** 2026-03-29 03:59:58.759035 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 03:59:58.759042 | orchestrator | 2026-03-29 03:59:58.759048 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-29 03:59:58.759056 | orchestrator | Sunday 29 March 2026 03:59:43 +0000 (0:00:00.881) 0:00:01.328 ********** 2026-03-29 03:59:58.759062 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 03:59:58.759069 | orchestrator | 2026-03-29 03:59:58.759075 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-29 03:59:58.759082 | orchestrator | Sunday 29 March 2026 03:59:44 +0000 (0:00:01.048) 0:00:02.376 ********** 2026-03-29 03:59:58.759088 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759096 
| orchestrator | 2026-03-29 03:59:58.759102 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-29 03:59:58.759109 | orchestrator | Sunday 29 March 2026 03:59:44 +0000 (0:00:00.125) 0:00:02.501 ********** 2026-03-29 03:59:58.759115 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759121 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:59:58.759128 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:59:58.759134 | orchestrator | 2026-03-29 03:59:58.759141 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-29 03:59:58.759147 | orchestrator | Sunday 29 March 2026 03:59:45 +0000 (0:00:00.308) 0:00:02.810 ********** 2026-03-29 03:59:58.759154 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759160 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:59:58.759167 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:59:58.759173 | orchestrator | 2026-03-29 03:59:58.759179 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-29 03:59:58.759186 | orchestrator | Sunday 29 March 2026 03:59:46 +0000 (0:00:01.042) 0:00:03.853 ********** 2026-03-29 03:59:58.759192 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759199 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:59:58.759205 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:59:58.759211 | orchestrator | 2026-03-29 03:59:58.759217 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-29 03:59:58.759224 | orchestrator | Sunday 29 March 2026 03:59:46 +0000 (0:00:00.282) 0:00:04.135 ********** 2026-03-29 03:59:58.759230 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759237 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:59:58.759243 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:59:58.759250 | orchestrator | 2026-03-29 03:59:58.759256 | 
orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 03:59:58.759263 | orchestrator | Sunday 29 March 2026 03:59:46 +0000 (0:00:00.508) 0:00:04.644 ********** 2026-03-29 03:59:58.759269 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759275 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:59:58.759281 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:59:58.759287 | orchestrator | 2026-03-29 03:59:58.759294 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-29 03:59:58.759300 | orchestrator | Sunday 29 March 2026 03:59:47 +0000 (0:00:00.299) 0:00:04.944 ********** 2026-03-29 03:59:58.759325 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759332 | orchestrator | skipping: [testbed-node-1] 2026-03-29 03:59:58.759339 | orchestrator | skipping: [testbed-node-2] 2026-03-29 03:59:58.759345 | orchestrator | 2026-03-29 03:59:58.759352 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-29 03:59:58.759358 | orchestrator | Sunday 29 March 2026 03:59:47 +0000 (0:00:00.296) 0:00:05.241 ********** 2026-03-29 03:59:58.759365 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759371 | orchestrator | ok: [testbed-node-1] 2026-03-29 03:59:58.759377 | orchestrator | ok: [testbed-node-2] 2026-03-29 03:59:58.759383 | orchestrator | 2026-03-29 03:59:58.759391 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 03:59:58.759397 | orchestrator | Sunday 29 March 2026 03:59:47 +0000 (0:00:00.487) 0:00:05.728 ********** 2026-03-29 03:59:58.759404 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759411 | orchestrator | 2026-03-29 03:59:58.759418 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 03:59:58.759425 | orchestrator | Sunday 29 March 2026 03:59:48 +0000 
(0:00:00.294) 0:00:06.022 ********** 2026-03-29 03:59:58.759432 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759438 | orchestrator | 2026-03-29 03:59:58.759444 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 03:59:58.759450 | orchestrator | Sunday 29 March 2026 03:59:48 +0000 (0:00:00.280) 0:00:06.303 ********** 2026-03-29 03:59:58.759457 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759463 | orchestrator | 2026-03-29 03:59:58.759470 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 03:59:58.759476 | orchestrator | Sunday 29 March 2026 03:59:48 +0000 (0:00:00.258) 0:00:06.561 ********** 2026-03-29 03:59:58.759482 | orchestrator | 2026-03-29 03:59:58.759489 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 03:59:58.759495 | orchestrator | Sunday 29 March 2026 03:59:48 +0000 (0:00:00.078) 0:00:06.639 ********** 2026-03-29 03:59:58.759502 | orchestrator | 2026-03-29 03:59:58.759509 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 03:59:58.759515 | orchestrator | Sunday 29 March 2026 03:59:48 +0000 (0:00:00.071) 0:00:06.711 ********** 2026-03-29 03:59:58.759522 | orchestrator | 2026-03-29 03:59:58.759528 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 03:59:58.759532 | orchestrator | Sunday 29 March 2026 03:59:49 +0000 (0:00:00.075) 0:00:06.787 ********** 2026-03-29 03:59:58.759537 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759541 | orchestrator | 2026-03-29 03:59:58.759546 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-29 03:59:58.759550 | orchestrator | Sunday 29 March 2026 03:59:49 +0000 (0:00:00.256) 0:00:07.043 ********** 2026-03-29 03:59:58.759555 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759559 | orchestrator | 2026-03-29 03:59:58.759575 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-29 03:59:58.759580 | orchestrator | Sunday 29 March 2026 03:59:49 +0000 (0:00:00.234) 0:00:07.278 ********** 2026-03-29 03:59:58.759584 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759589 | orchestrator | 2026-03-29 03:59:58.759613 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-29 03:59:58.759618 | orchestrator | Sunday 29 March 2026 03:59:49 +0000 (0:00:00.129) 0:00:07.407 ********** 2026-03-29 03:59:58.759623 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:59:58.759627 | orchestrator | 2026-03-29 03:59:58.759634 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-29 03:59:58.759638 | orchestrator | Sunday 29 March 2026 03:59:51 +0000 (0:00:01.615) 0:00:09.023 ********** 2026-03-29 03:59:58.759643 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759647 | orchestrator | 2026-03-29 03:59:58.759652 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-29 03:59:58.759661 | orchestrator | Sunday 29 March 2026 03:59:51 +0000 (0:00:00.525) 0:00:09.549 ********** 2026-03-29 03:59:58.759666 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759670 | orchestrator | 2026-03-29 03:59:58.759685 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-29 03:59:58.759690 | orchestrator | Sunday 29 March 2026 03:59:51 +0000 (0:00:00.120) 0:00:09.669 ********** 2026-03-29 03:59:58.759694 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759698 | orchestrator | 2026-03-29 03:59:58.759703 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-29 
03:59:58.759707 | orchestrator | Sunday 29 March 2026 03:59:52 +0000 (0:00:00.329) 0:00:09.998 ********** 2026-03-29 03:59:58.759712 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759716 | orchestrator | 2026-03-29 03:59:58.759720 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-29 03:59:58.759725 | orchestrator | Sunday 29 March 2026 03:59:52 +0000 (0:00:00.323) 0:00:10.321 ********** 2026-03-29 03:59:58.759729 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759733 | orchestrator | 2026-03-29 03:59:58.759737 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-29 03:59:58.759740 | orchestrator | Sunday 29 March 2026 03:59:52 +0000 (0:00:00.115) 0:00:10.436 ********** 2026-03-29 03:59:58.759744 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759748 | orchestrator | 2026-03-29 03:59:58.759752 | orchestrator | TASK [Prepare status test vars] ************************************************ 2026-03-29 03:59:58.759755 | orchestrator | Sunday 29 March 2026 03:59:52 +0000 (0:00:00.137) 0:00:10.574 ********** 2026-03-29 03:59:58.759759 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759763 | orchestrator | 2026-03-29 03:59:58.759766 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-29 03:59:58.759770 | orchestrator | Sunday 29 March 2026 03:59:52 +0000 (0:00:00.126) 0:00:10.701 ********** 2026-03-29 03:59:58.759774 | orchestrator | changed: [testbed-node-0] 2026-03-29 03:59:58.759778 | orchestrator | 2026-03-29 03:59:58.759781 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-29 03:59:58.759785 | orchestrator | Sunday 29 March 2026 03:59:54 +0000 (0:00:01.499) 0:00:12.200 ********** 2026-03-29 03:59:58.759789 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759792 | orchestrator | 2026-03-29 
03:59:58.759796 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-29 03:59:58.759800 | orchestrator | Sunday 29 March 2026 03:59:54 +0000 (0:00:00.323) 0:00:12.523 ********** 2026-03-29 03:59:58.759803 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759807 | orchestrator | 2026-03-29 03:59:58.759811 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-29 03:59:58.759815 | orchestrator | Sunday 29 March 2026 03:59:54 +0000 (0:00:00.164) 0:00:12.688 ********** 2026-03-29 03:59:58.759818 | orchestrator | ok: [testbed-node-0] 2026-03-29 03:59:58.759822 | orchestrator | 2026-03-29 03:59:58.759826 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-29 03:59:58.759829 | orchestrator | Sunday 29 March 2026 03:59:55 +0000 (0:00:00.131) 0:00:12.819 ********** 2026-03-29 03:59:58.759833 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759837 | orchestrator | 2026-03-29 03:59:58.759843 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-29 03:59:58.759847 | orchestrator | Sunday 29 March 2026 03:59:55 +0000 (0:00:00.148) 0:00:12.967 ********** 2026-03-29 03:59:58.759850 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759854 | orchestrator | 2026-03-29 03:59:58.759858 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-29 03:59:58.759862 | orchestrator | Sunday 29 March 2026 03:59:55 +0000 (0:00:00.396) 0:00:13.364 ********** 2026-03-29 03:59:58.759865 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 03:59:58.759869 | orchestrator | 2026-03-29 03:59:58.759873 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-29 03:59:58.759880 | orchestrator | Sunday 29 March 2026 03:59:55 +0000 
(0:00:00.260) 0:00:13.624 ********** 2026-03-29 03:59:58.759883 | orchestrator | skipping: [testbed-node-0] 2026-03-29 03:59:58.759887 | orchestrator | 2026-03-29 03:59:58.759891 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 03:59:58.759894 | orchestrator | Sunday 29 March 2026 03:59:56 +0000 (0:00:00.260) 0:00:13.885 ********** 2026-03-29 03:59:58.759898 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 03:59:58.759902 | orchestrator | 2026-03-29 03:59:58.759906 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 03:59:58.759909 | orchestrator | Sunday 29 March 2026 03:59:57 +0000 (0:00:01.835) 0:00:15.720 ********** 2026-03-29 03:59:58.759913 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 03:59:58.759917 | orchestrator | 2026-03-29 03:59:58.759920 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 03:59:58.759924 | orchestrator | Sunday 29 March 2026 03:59:58 +0000 (0:00:00.286) 0:00:16.007 ********** 2026-03-29 03:59:58.759928 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 03:59:58.759931 | orchestrator | 2026-03-29 03:59:58.759938 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 04:00:01.501075 | orchestrator | Sunday 29 March 2026 03:59:58 +0000 (0:00:00.302) 0:00:16.310 ********** 2026-03-29 04:00:01.501188 | orchestrator | 2026-03-29 04:00:01.501239 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 04:00:01.501253 | orchestrator | Sunday 29 March 2026 03:59:58 +0000 (0:00:00.072) 0:00:16.383 ********** 2026-03-29 04:00:01.501266 | orchestrator | 2026-03-29 04:00:01.501279 | orchestrator | TASK [Flush handlers] 
********************************************************** 2026-03-29 04:00:01.501292 | orchestrator | Sunday 29 March 2026 03:59:58 +0000 (0:00:00.071) 0:00:16.454 ********** 2026-03-29 04:00:01.501306 | orchestrator | 2026-03-29 04:00:01.501319 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-29 04:00:01.501332 | orchestrator | Sunday 29 March 2026 03:59:58 +0000 (0:00:00.075) 0:00:16.529 ********** 2026-03-29 04:00:01.501345 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:01.501358 | orchestrator | 2026-03-29 04:00:01.501372 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 04:00:01.501386 | orchestrator | Sunday 29 March 2026 04:00:00 +0000 (0:00:01.562) 0:00:18.092 ********** 2026-03-29 04:00:01.501400 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-29 04:00:01.501414 | orchestrator |  "msg": [ 2026-03-29 04:00:01.501429 | orchestrator |  "Validator run completed.", 2026-03-29 04:00:01.501442 | orchestrator |  "You can find the report file here:", 2026-03-29 04:00:01.501455 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-29T03:59:43+00:00-report.json", 2026-03-29 04:00:01.501469 | orchestrator |  "on the following host:", 2026-03-29 04:00:01.501482 | orchestrator |  "testbed-manager" 2026-03-29 04:00:01.501495 | orchestrator |  ] 2026-03-29 04:00:01.501508 | orchestrator | } 2026-03-29 04:00:01.501522 | orchestrator | 2026-03-29 04:00:01.501562 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:00:01.501577 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-29 04:00:01.501625 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:00:01.501643 | orchestrator | 
testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:00:01.501655 | orchestrator | 2026-03-29 04:00:01.501668 | orchestrator | 2026-03-29 04:00:01.501682 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:00:01.501724 | orchestrator | Sunday 29 March 2026 04:00:01 +0000 (0:00:00.848) 0:00:18.940 ********** 2026-03-29 04:00:01.501739 | orchestrator | =============================================================================== 2026-03-29 04:00:01.501752 | orchestrator | Aggregate test results step one ----------------------------------------- 1.84s 2026-03-29 04:00:01.501765 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.62s 2026-03-29 04:00:01.501778 | orchestrator | Write report file ------------------------------------------------------- 1.56s 2026-03-29 04:00:01.501791 | orchestrator | Gather status data ------------------------------------------------------ 1.50s 2026-03-29 04:00:01.501804 | orchestrator | Create report output directory ------------------------------------------ 1.05s 2026-03-29 04:00:01.501817 | orchestrator | Get container info ------------------------------------------------------ 1.04s 2026-03-29 04:00:01.501830 | orchestrator | Get timestamp for report file ------------------------------------------- 0.88s 2026-03-29 04:00:01.501842 | orchestrator | Print report file information ------------------------------------------- 0.85s 2026-03-29 04:00:01.501873 | orchestrator | Set quorum test data ---------------------------------------------------- 0.53s 2026-03-29 04:00:01.501887 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s 2026-03-29 04:00:01.501901 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.49s 2026-03-29 04:00:01.501913 | orchestrator | Pass cluster-health if status is OK (strict) 
---------------------------- 0.40s 2026-03-29 04:00:01.501927 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2026-03-29 04:00:01.501939 | orchestrator | Set health test data ---------------------------------------------------- 0.32s 2026-03-29 04:00:01.501953 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2026-03-29 04:00:01.501967 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-03-29 04:00:01.501981 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s 2026-03-29 04:00:01.501995 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-03-29 04:00:01.502007 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2026-03-29 04:00:01.502085 | orchestrator | Aggregate test results step one ----------------------------------------- 0.29s 2026-03-29 04:00:01.866474 | orchestrator | + osism validate ceph-mgrs 2026-03-29 04:00:33.653498 | orchestrator | 2026-03-29 04:00:33.653628 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-29 04:00:33.653643 | orchestrator | 2026-03-29 04:00:33.653650 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-29 04:00:33.653658 | orchestrator | Sunday 29 March 2026 04:00:18 +0000 (0:00:00.429) 0:00:00.429 ********** 2026-03-29 04:00:33.653665 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:33.653671 | orchestrator | 2026-03-29 04:00:33.653678 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-29 04:00:33.653694 | orchestrator | Sunday 29 March 2026 04:00:19 +0000 (0:00:00.884) 0:00:01.314 ********** 2026-03-29 04:00:33.653701 | orchestrator | ok: 
[testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:33.653714 | orchestrator | 2026-03-29 04:00:33.653721 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-29 04:00:33.653727 | orchestrator | Sunday 29 March 2026 04:00:20 +0000 (0:00:00.996) 0:00:02.310 ********** 2026-03-29 04:00:33.653734 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.653741 | orchestrator | 2026-03-29 04:00:33.653748 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-29 04:00:33.653754 | orchestrator | Sunday 29 March 2026 04:00:20 +0000 (0:00:00.129) 0:00:02.440 ********** 2026-03-29 04:00:33.653761 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.653767 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:00:33.653774 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:00:33.653780 | orchestrator | 2026-03-29 04:00:33.653809 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-29 04:00:33.653817 | orchestrator | Sunday 29 March 2026 04:00:21 +0000 (0:00:00.291) 0:00:02.731 ********** 2026-03-29 04:00:33.653823 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.653830 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:00:33.653836 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:00:33.653842 | orchestrator | 2026-03-29 04:00:33.653849 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-29 04:00:33.653856 | orchestrator | Sunday 29 March 2026 04:00:22 +0000 (0:00:01.055) 0:00:03.786 ********** 2026-03-29 04:00:33.653862 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.653868 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:00:33.653874 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:00:33.653881 | orchestrator | 2026-03-29 04:00:33.653887 | orchestrator | TASK [Set test result to passed if container is 
existing] ********************** 2026-03-29 04:00:33.653893 | orchestrator | Sunday 29 March 2026 04:00:22 +0000 (0:00:00.285) 0:00:04.071 ********** 2026-03-29 04:00:33.653899 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.653906 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:00:33.653912 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:00:33.653918 | orchestrator | 2026-03-29 04:00:33.653924 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 04:00:33.653930 | orchestrator | Sunday 29 March 2026 04:00:22 +0000 (0:00:00.521) 0:00:04.593 ********** 2026-03-29 04:00:33.653935 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.653941 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:00:33.653947 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:00:33.653954 | orchestrator | 2026-03-29 04:00:33.653960 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2026-03-29 04:00:33.653967 | orchestrator | Sunday 29 March 2026 04:00:23 +0000 (0:00:00.361) 0:00:04.954 ********** 2026-03-29 04:00:33.653973 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.653980 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:00:33.653986 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:00:33.653992 | orchestrator | 2026-03-29 04:00:33.653998 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-29 04:00:33.654003 | orchestrator | Sunday 29 March 2026 04:00:23 +0000 (0:00:00.301) 0:00:05.256 ********** 2026-03-29 04:00:33.654010 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.654117 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:00:33.654125 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:00:33.654132 | orchestrator | 2026-03-29 04:00:33.654138 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 
04:00:33.654145 | orchestrator | Sunday 29 March 2026 04:00:24 +0000 (0:00:00.579) 0:00:05.835 ********** 2026-03-29 04:00:33.654151 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.654157 | orchestrator | 2026-03-29 04:00:33.654163 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 04:00:33.654169 | orchestrator | Sunday 29 March 2026 04:00:24 +0000 (0:00:00.247) 0:00:06.082 ********** 2026-03-29 04:00:33.654176 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.654182 | orchestrator | 2026-03-29 04:00:33.654189 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 04:00:33.654195 | orchestrator | Sunday 29 March 2026 04:00:24 +0000 (0:00:00.260) 0:00:06.343 ********** 2026-03-29 04:00:33.654203 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.654210 | orchestrator | 2026-03-29 04:00:33.654216 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 04:00:33.654223 | orchestrator | Sunday 29 March 2026 04:00:24 +0000 (0:00:00.270) 0:00:06.613 ********** 2026-03-29 04:00:33.654230 | orchestrator | 2026-03-29 04:00:33.654237 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 04:00:33.654243 | orchestrator | Sunday 29 March 2026 04:00:25 +0000 (0:00:00.072) 0:00:06.686 ********** 2026-03-29 04:00:33.654249 | orchestrator | 2026-03-29 04:00:33.654255 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 04:00:33.654270 | orchestrator | Sunday 29 March 2026 04:00:25 +0000 (0:00:00.072) 0:00:06.759 ********** 2026-03-29 04:00:33.654277 | orchestrator | 2026-03-29 04:00:33.654283 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 04:00:33.654290 | orchestrator | Sunday 29 March 2026 04:00:25 +0000 
(0:00:00.077) 0:00:06.837 ********** 2026-03-29 04:00:33.654296 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.654303 | orchestrator | 2026-03-29 04:00:33.654309 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-29 04:00:33.654315 | orchestrator | Sunday 29 March 2026 04:00:25 +0000 (0:00:00.268) 0:00:07.105 ********** 2026-03-29 04:00:33.654322 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.654328 | orchestrator | 2026-03-29 04:00:33.654351 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-29 04:00:33.654357 | orchestrator | Sunday 29 March 2026 04:00:25 +0000 (0:00:00.242) 0:00:07.348 ********** 2026-03-29 04:00:33.654363 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.654370 | orchestrator | 2026-03-29 04:00:33.654376 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-03-29 04:00:33.654382 | orchestrator | Sunday 29 March 2026 04:00:25 +0000 (0:00:00.129) 0:00:07.478 ********** 2026-03-29 04:00:33.654388 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:00:33.654395 | orchestrator | 2026-03-29 04:00:33.654401 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-29 04:00:33.654407 | orchestrator | Sunday 29 March 2026 04:00:27 +0000 (0:00:01.980) 0:00:09.458 ********** 2026-03-29 04:00:33.654413 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.654431 | orchestrator | 2026-03-29 04:00:33.654437 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-29 04:00:33.654450 | orchestrator | Sunday 29 March 2026 04:00:28 +0000 (0:00:00.468) 0:00:09.926 ********** 2026-03-29 04:00:33.654456 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.654463 | orchestrator | 2026-03-29 04:00:33.654469 | orchestrator | TASK [Fail test if mgr modules are 
disabled that should be enabled] ************ 2026-03-29 04:00:33.654475 | orchestrator | Sunday 29 March 2026 04:00:28 +0000 (0:00:00.347) 0:00:10.273 ********** 2026-03-29 04:00:33.654482 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.654488 | orchestrator | 2026-03-29 04:00:33.654494 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-29 04:00:33.654501 | orchestrator | Sunday 29 March 2026 04:00:28 +0000 (0:00:00.136) 0:00:10.410 ********** 2026-03-29 04:00:33.654507 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:00:33.654513 | orchestrator | 2026-03-29 04:00:33.654519 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-29 04:00:33.654526 | orchestrator | Sunday 29 March 2026 04:00:28 +0000 (0:00:00.162) 0:00:10.572 ********** 2026-03-29 04:00:33.654532 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:33.654538 | orchestrator | 2026-03-29 04:00:33.654544 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-29 04:00:33.654551 | orchestrator | Sunday 29 March 2026 04:00:29 +0000 (0:00:00.251) 0:00:10.824 ********** 2026-03-29 04:00:33.654557 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:00:33.654563 | orchestrator | 2026-03-29 04:00:33.654570 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 04:00:33.654689 | orchestrator | Sunday 29 March 2026 04:00:29 +0000 (0:00:00.314) 0:00:11.139 ********** 2026-03-29 04:00:33.654705 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:33.654709 | orchestrator | 2026-03-29 04:00:33.654713 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 04:00:33.654717 | orchestrator | Sunday 29 March 2026 04:00:30 +0000 (0:00:01.346) 0:00:12.485 ********** 
2026-03-29 04:00:33.654721 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:33.654725 | orchestrator | 2026-03-29 04:00:33.654734 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 04:00:33.654738 | orchestrator | Sunday 29 March 2026 04:00:31 +0000 (0:00:00.274) 0:00:12.760 ********** 2026-03-29 04:00:33.654741 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:33.654745 | orchestrator | 2026-03-29 04:00:33.654749 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 04:00:33.654753 | orchestrator | Sunday 29 March 2026 04:00:31 +0000 (0:00:00.255) 0:00:13.016 ********** 2026-03-29 04:00:33.654756 | orchestrator | 2026-03-29 04:00:33.654760 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 04:00:33.654764 | orchestrator | Sunday 29 March 2026 04:00:31 +0000 (0:00:00.077) 0:00:13.093 ********** 2026-03-29 04:00:33.654768 | orchestrator | 2026-03-29 04:00:33.654771 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 04:00:33.654775 | orchestrator | Sunday 29 March 2026 04:00:31 +0000 (0:00:00.069) 0:00:13.162 ********** 2026-03-29 04:00:33.654779 | orchestrator | 2026-03-29 04:00:33.654782 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-29 04:00:33.654786 | orchestrator | Sunday 29 March 2026 04:00:31 +0000 (0:00:00.276) 0:00:13.439 ********** 2026-03-29 04:00:33.654790 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:33.654793 | orchestrator | 2026-03-29 04:00:33.654797 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 04:00:33.654801 | orchestrator | Sunday 29 March 2026 04:00:33 +0000 (0:00:01.468) 
0:00:14.907 ********** 2026-03-29 04:00:33.654807 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-29 04:00:33.654811 | orchestrator |  "msg": [ 2026-03-29 04:00:33.654815 | orchestrator |  "Validator run completed.", 2026-03-29 04:00:33.654819 | orchestrator |  "You can find the report file here:", 2026-03-29 04:00:33.654823 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-29T04:00:19+00:00-report.json", 2026-03-29 04:00:33.654829 | orchestrator |  "on the following host:", 2026-03-29 04:00:33.654832 | orchestrator |  "testbed-manager" 2026-03-29 04:00:33.654837 | orchestrator |  ] 2026-03-29 04:00:33.654841 | orchestrator | } 2026-03-29 04:00:33.654844 | orchestrator | 2026-03-29 04:00:33.654848 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:00:33.654853 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-29 04:00:33.654858 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:00:33.654870 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:00:33.999000 | orchestrator | 2026-03-29 04:00:33.999116 | orchestrator | 2026-03-29 04:00:33.999151 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:00:33.999167 | orchestrator | Sunday 29 March 2026 04:00:33 +0000 (0:00:00.416) 0:00:15.324 ********** 2026-03-29 04:00:33.999178 | orchestrator | =============================================================================== 2026-03-29 04:00:33.999190 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.98s 2026-03-29 04:00:33.999201 | orchestrator | Write report file ------------------------------------------------------- 1.47s 2026-03-29 04:00:33.999218 | orchestrator 
| Aggregate test results step one ----------------------------------------- 1.35s 2026-03-29 04:00:33.999237 | orchestrator | Get container info ------------------------------------------------------ 1.06s 2026-03-29 04:00:33.999249 | orchestrator | Create report output directory ------------------------------------------ 1.00s 2026-03-29 04:00:33.999262 | orchestrator | Get timestamp for report file ------------------------------------------- 0.88s 2026-03-29 04:00:33.999275 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.58s 2026-03-29 04:00:33.999318 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s 2026-03-29 04:00:33.999332 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.47s 2026-03-29 04:00:33.999345 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s 2026-03-29 04:00:33.999357 | orchestrator | Print report file information ------------------------------------------- 0.42s 2026-03-29 04:00:33.999368 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2026-03-29 04:00:33.999381 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s 2026-03-29 04:00:33.999394 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.31s 2026-03-29 04:00:33.999406 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2026-03-29 04:00:33.999418 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2026-03-29 04:00:33.999431 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-03-29 04:00:33.999444 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2026-03-29 04:00:33.999456 | orchestrator | Aggregate 
test results step three --------------------------------------- 0.27s 2026-03-29 04:00:33.999469 | orchestrator | Print report file information ------------------------------------------- 0.27s 2026-03-29 04:00:34.336921 | orchestrator | + osism validate ceph-osds 2026-03-29 04:00:55.779706 | orchestrator | 2026-03-29 04:00:55.779803 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-29 04:00:55.779815 | orchestrator | 2026-03-29 04:00:55.779823 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-29 04:00:55.779832 | orchestrator | Sunday 29 March 2026 04:00:51 +0000 (0:00:00.438) 0:00:00.438 ********** 2026-03-29 04:00:55.779840 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:55.779847 | orchestrator | 2026-03-29 04:00:55.779855 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 04:00:55.779863 | orchestrator | Sunday 29 March 2026 04:00:51 +0000 (0:00:00.845) 0:00:01.284 ********** 2026-03-29 04:00:55.779870 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:55.779877 | orchestrator | 2026-03-29 04:00:55.779885 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-29 04:00:55.779892 | orchestrator | Sunday 29 March 2026 04:00:52 +0000 (0:00:00.555) 0:00:01.840 ********** 2026-03-29 04:00:55.779899 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 04:00:55.779907 | orchestrator | 2026-03-29 04:00:55.779914 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-29 04:00:55.779921 | orchestrator | Sunday 29 March 2026 04:00:53 +0000 (0:00:00.773) 0:00:02.613 ********** 2026-03-29 04:00:55.779928 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:00:55.779936 | orchestrator | 2026-03-29 
04:00:55.779944 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-29 04:00:55.779952 | orchestrator | Sunday 29 March 2026 04:00:53 +0000 (0:00:00.125) 0:00:02.739 ********** 2026-03-29 04:00:55.779959 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:00:55.779966 | orchestrator | 2026-03-29 04:00:55.779974 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-29 04:00:55.779995 | orchestrator | Sunday 29 March 2026 04:00:53 +0000 (0:00:00.135) 0:00:02.874 ********** 2026-03-29 04:00:55.780003 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:00:55.780010 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:00:55.780018 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:00:55.780025 | orchestrator | 2026-03-29 04:00:55.780032 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-29 04:00:55.780039 | orchestrator | Sunday 29 March 2026 04:00:53 +0000 (0:00:00.330) 0:00:03.205 ********** 2026-03-29 04:00:55.780047 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:00:55.780070 | orchestrator | 2026-03-29 04:00:55.780078 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-29 04:00:55.780085 | orchestrator | Sunday 29 March 2026 04:00:53 +0000 (0:00:00.158) 0:00:03.363 ********** 2026-03-29 04:00:55.780092 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:00:55.780100 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:00:55.780107 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:00:55.780114 | orchestrator | 2026-03-29 04:00:55.780122 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-29 04:00:55.780129 | orchestrator | Sunday 29 March 2026 04:00:54 +0000 (0:00:00.344) 0:00:03.708 ********** 2026-03-29 04:00:55.780136 | orchestrator | ok: [testbed-node-3] 2026-03-29 
04:00:55.780143 | orchestrator | 2026-03-29 04:00:55.780151 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 04:00:55.780158 | orchestrator | Sunday 29 March 2026 04:00:55 +0000 (0:00:00.808) 0:00:04.516 ********** 2026-03-29 04:00:55.780165 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:00:55.780172 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:00:55.780180 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:00:55.780187 | orchestrator | 2026-03-29 04:00:55.780195 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-29 04:00:55.780202 | orchestrator | Sunday 29 March 2026 04:00:55 +0000 (0:00:00.343) 0:00:04.860 ********** 2026-03-29 04:00:55.780211 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd388da2308b94fed2f27241e10504190111a86d2ba825107380b2bd8b8ea9bd1', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-03-29 04:00:55.780221 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d08505882d744af4c23e30426acd38f8432f6d36b7569e2b1458068fe19eefc', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-29 04:00:55.780230 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aa69c886981c578068affc5d8c7e07f9b4efb6d9177663edc64cc9bc07fa622f', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-29 04:00:55.780238 | orchestrator | skipping: [testbed-node-3] => (item={'id': '10221f7f9217185a2aea1de77f17110a4863d56973ae38649433b97b853db71c', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 
'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-03-29 04:00:55.780245 | orchestrator | skipping: [testbed-node-3] => (item={'id': '79a17cf52af0e51f07469cfe05f70838d1ce1c88ebafc6a24fb8b687edf179f8', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-03-29 04:00:55.780274 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'da353414420ca0e34e06f109a2ddc13a9018381c2aa07ae77c79ff44da964b5f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-03-29 04:00:55.780284 | orchestrator | skipping: [testbed-node-3] => (item={'id': '14ed5ee787a2ff9e0af9181640490fa1bf74105b929edf468a75fe4eeadaacd4', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-29 04:00:55.780293 | orchestrator | skipping: [testbed-node-3] => (item={'id': '693f0e4c024aa5290da4b4e135febe910fa56d76a5701df033d68451e50bd028', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-03-29 04:00:55.780302 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca468f2cb65df3e1b7ee7929b3fb0cb86b1b43ef0c67eafc51ac997ee09add9b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:55.780318 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ac2c74c086cf40e96c0497718da3f62dffb27ae2143333843689d9f4d3fece75', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:55.780331 | orchestrator | skipping: [testbed-node-3] => (item={'id': '309da534f454721a74b72746cbfccc9e5c81a27e95d230c9624e35d4d1550e0c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:55.780342 | orchestrator | ok: [testbed-node-3] => (item={'id': 'bef2e631340040c9ec014e0910530e2078f28ce4433f82341c6fb0c30ad64646', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:55.780352 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ee2f5bd8313e30a6dadbd3d0a829c697bb0d0310e573dbc10049aabc0fa5d4b3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:55.780360 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ea77f54e41e330dd7639bdc129b42bf203261d15852c1f2452dee3e5c1c61be7', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:55.780369 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4a7ba7cb828e7763bb7dcb0321206f1712feb719fa5319d5c1532a2fdf2c1de6', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-29 04:00:55.780378 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0fc1cbec2a73fbb116426c253124e7c678701d0284e61bf58aaa8683680839e2', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-29 04:00:55.780387 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4f5c8e45832c98a9b94e8e80fd02eb9f1bf7f233455d03e1d5620baf427a471f', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:00:55.780396 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4078475da36a7a17e3f7f6dfb3e0da369b96839f0344cafd72507b230dd4b194', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:00:55.780405 | orchestrator | skipping: [testbed-node-3] => (item={'id': '150aa9a07e0e4eda7b9690e10cf0e246a438e2efa6e35c56c53d2c2b782b5094', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:00:55.780414 | orchestrator | skipping: [testbed-node-4] => (item={'id': '134ed9f16a2a210b272bf258910b0afba3b18dfeec45bb89dae419d349cc053b', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-29 04:00:55.780428 | orchestrator | skipping: [testbed-node-4] => (item={'id': '06fcce12acbf2f14e727d1102f98737c36e469669d5f91724d4213e37610022a', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-29 04:00:56.055490 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a469d611e2ea5b36a3fdf87bb1f138067e1397d428407fc71741c1d87dfa9c97', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-29 04:00:56.055677 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4e1f6bb69710d95888a7a215982ef088ba657cd1aba9247042e9c7481a3c2316', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-03-29 04:00:56.055701 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a10811c446aeaddc66f8c533c0190926a69bd98406816848278ef26227a9e316', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-03-29 04:00:56.055715 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ac039fa78fcd83039fe1a3f87345f1609d938f377c570307e6819e12161bed84', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-03-29 04:00:56.055726 | orchestrator | skipping: [testbed-node-4] => (item={'id': '55670bdd31472838a868861eb41b92ae70884d7cb84cd9e6135d6bcb6fddaa21', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-29 04:00:56.055786 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9460d1c312a0cb99b3e8e028bc741696b16c7a47b5095d8b84650c3f75148f23', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-03-29 04:00:56.055799 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'be3511f930dfddb65c85b1f87f094e060e2843031404b2f9b371d4402d8c9d5e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.055811 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b5b26a35049fb00404855adaf24be8dc193f5b52c95b7fa911ae691040aaac8a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.055823 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f2dcd4792aeb44379f124c66e6c1741b94f983cef46815eceabcc2c7c66ecc93', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.055836 | orchestrator | ok: [testbed-node-4] => (item={'id': '21799bc7fd1f4db9aa88cc3f0102c88a4efd2fa17c81d18d684ac91be33b2874', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.055847 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd043c4a752f09c8c00ee345e815cfef1fe27761dff03fc467382e8fbb3bdbbfc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.055854 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4d81b2ffb6ab91e9613099384be9c3e87f2f3fe849236f545cc05a0d5b5dfccb', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.055866 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2e8784c64445698ff4563f34ba07c2c3077643eedb1c0f152a181ebd706d3968', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-29 04:00:56.055877 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ad044e17549539a6943fc74681ca7b7ff940f6198802dc0f09f42241b118624b', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-29 04:00:56.055911 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd197e9cf08ee066f298d20c6330930b2a23f014ef3f54d32519233b265c428ce', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:00:56.055934 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ce2f23ea417c8bb5c6b1faa61a00fe6f5cb9192bee59fef34164d7c38a0cf2d0', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:00:56.055944 | orchestrator | skipping: [testbed-node-4] => (item={'id': '99fdf96b3c06c08c5ae81cd28194b3f8d92b3c97e220cfaf86677946d6da924f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:00:56.055953 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9048e0e3a3a53e8f1fc44828c9d3adb9a5ae7123e85c1dfb904bf7ea58ecb84', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-29 04:00:56.055968 | orchestrator | skipping: [testbed-node-5] => (item={'id': '543697cc9c1e8b01aa80bf15722694210c85466232ca7d1290f07782915c1f1d', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-29 04:00:56.055978 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5d58a113eb2f313563d0ac962f2d1ddffc64315b6f7090428d0872c0d645c4f8', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-29 04:00:56.055987 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4d7ddd66b0c6021a6cd3b123d9c10873ea82cc58fa7b3e8ed517557b5f896598', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-03-29 04:00:56.055996 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5e67dc0b893f0842ccbf23bff109f1dbd5b566770bd9b5314cb27e673bd43064', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-03-29 04:00:56.056005 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c6c22cf3fd356b6ffd1a4775f89f507e7c68cc3cddecbd5a7debf952ff482f46', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-29 04:00:56.056015 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cf5dd36d3fac296230377a811b5182d5fd42188d82e950b7c8433c6c948fdbb1', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-29 04:00:56.056027 | orchestrator | skipping: [testbed-node-5] => (item={'id': '85af789200c830bd305ca7c26870eb7f703a61180f6a171bc395d2f3f6d4f13d', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-03-29 04:00:56.056037 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9ef3960a904ba744e6530dc94170c13407f564b7816fa3a8eb5448d65758a158', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.056048 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ee1b36bb21b422f4c584b20697324061a2bac2e1b00dc71153cdb42b72d1d218', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.056060 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5161450b77443e8ba93c4c0921b6ad3cc90578707f9e7a6bfad0ed6a42f2df5d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.056078 | orchestrator | ok: [testbed-node-5] => (item={'id': '4eae2fe1bd3ac3e14cecd46902f06cf84b33a71ef967754261fde2a7c6954f88', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:00:56.056099 | orchestrator | ok: [testbed-node-5] => (item={'id': '8f611699f7dd2b99c28af68037837da12f881e40d970bb3b5284b269ca2b1ccd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:01:08.077586 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3f908267d5c662a1f06b83eb5866f8151a2436057064c0bf3ad3ffacb40fef27', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-29 04:01:08.077665 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1de50533866f95bab11fbf802a0d7cd52d6ef7165d8af4484e4d8fa39794fa63', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-29 04:01:08.077673 | orchestrator | skipping: [testbed-node-5] => (item={'id': '84b569386bdcd97d5d622123a7c37089d101ee07bc1f2b01e0bbbcd4eeb9e7b6', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-29 04:01:08.077690 | orchestrator | skipping: [testbed-node-5] => (item={'id': '99f06139d2cce3a415aacb9a43149130940a78cb8ecc1e1b92ab6026aa1ca20a', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:01:08.077696 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cc6b89251b1431986679b6ca6366c30f3d1bf39279cc9516b8a6840d8f4cc736', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:01:08.077700 | orchestrator | skipping: [testbed-node-5] => (item={'id': '387e9a22c46e2b2d9e7ed2c5ceaa948a662072a060b2835d8cc32141f406a621', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-29 04:01:08.077705 | orchestrator |
2026-03-29 04:01:08.077710 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-03-29 04:01:08.077715 | orchestrator | Sunday 29 March 2026 04:00:56 +0000 (0:00:00.549) 0:00:05.409 **********
2026-03-29 04:01:08.077719 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.077723 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:08.077727 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:08.077731 | orchestrator |
2026-03-29 04:01:08.077735 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-29 04:01:08.077739 | orchestrator | Sunday 29 March 2026 04:00:56 +0000 (0:00:00.337) 0:00:05.747 **********
2026-03-29 04:01:08.077743 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.077747 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:01:08.077751 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:01:08.077755 | orchestrator |
2026-03-29 04:01:08.077759 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-29 04:01:08.077763 | orchestrator | Sunday 29 March 2026 04:00:56 +0000 (0:00:00.507) 0:00:06.254 **********
2026-03-29 04:01:08.077767 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.077770 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:08.077774 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:08.077778 | orchestrator |
2026-03-29 04:01:08.077782 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-29 04:01:08.077786 | orchestrator | Sunday 29 March 2026 04:00:57 +0000 (0:00:00.328) 0:00:06.582 **********
2026-03-29 04:01:08.077802 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.077806 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:08.077810 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:08.077813 | orchestrator |
2026-03-29 04:01:08.077817 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-29 04:01:08.077821 | orchestrator | Sunday 29 March 2026 04:00:57 +0000 (0:00:00.325) 0:00:06.908 **********
2026-03-29 04:01:08.077825 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-29 04:01:08.077830 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-29 04:01:08.077834 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.077838 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-29 04:01:08.077841 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-29 04:01:08.077845 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:01:08.077849 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-29 04:01:08.077853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-29 04:01:08.077856 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:01:08.077860 | orchestrator |
2026-03-29 04:01:08.077864 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-29 04:01:08.077868 | orchestrator | Sunday 29 March 2026 04:00:57 +0000 (0:00:00.359) 0:00:07.267 **********
2026-03-29 04:01:08.077872 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.077875 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:08.077879 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:08.077883 | orchestrator |
2026-03-29 04:01:08.077887 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-29 04:01:08.077891 | orchestrator | Sunday 29 March 2026 04:00:58 +0000 (0:00:00.560) 0:00:07.827 **********
2026-03-29 04:01:08.077894 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.077908 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:01:08.077912 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:01:08.077916 | orchestrator |
2026-03-29 04:01:08.077920 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-29 04:01:08.077923 | orchestrator | Sunday 29 March 2026 04:00:58 +0000 (0:00:00.330) 0:00:08.157 **********
2026-03-29 04:01:08.077927 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.077931 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:01:08.077935 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:01:08.077939 | orchestrator |
2026-03-29 04:01:08.077942 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-29 04:01:08.077946 | orchestrator | Sunday 29 March 2026 04:00:59 +0000 (0:00:00.320) 0:00:08.478 **********
2026-03-29 04:01:08.077950 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.077954 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:08.077957 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:08.077961 | orchestrator |
2026-03-29 04:01:08.077965 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-29 04:01:08.077968 | orchestrator | Sunday 29 March 2026 04:00:59 +0000 (0:00:00.335) 0:00:08.814 **********
2026-03-29 04:01:08.077972 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.077976 | orchestrator |
2026-03-29 04:01:08.077980 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-29 04:01:08.077983 | orchestrator | Sunday 29 March 2026 04:01:00 +0000 (0:00:00.709) 0:00:09.524 **********
2026-03-29 04:01:08.077990 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.077994 | orchestrator |
2026-03-29 04:01:08.077997 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-29 04:01:08.078001 | orchestrator | Sunday 29 March 2026 04:01:00 +0000 (0:00:00.310) 0:00:09.834 **********
2026-03-29 04:01:08.078009 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.078032 | orchestrator |
2026-03-29 04:01:08.078037 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 04:01:08.078040 | orchestrator | Sunday 29 March 2026 04:01:00 +0000 (0:00:00.084) 0:00:10.138 **********
2026-03-29 04:01:08.078045 | orchestrator |
2026-03-29 04:01:08.078048 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 04:01:08.078053 | orchestrator | Sunday 29 March 2026 04:01:00 +0000 (0:00:00.073) 0:00:10.223 **********
2026-03-29 04:01:08.078057 | orchestrator |
2026-03-29 04:01:08.078061 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 04:01:08.078065 | orchestrator | Sunday 29 March 2026 04:01:00 +0000 (0:00:00.074) 0:00:10.297 **********
2026-03-29 04:01:08.078069 | orchestrator |
2026-03-29 04:01:08.078072 | orchestrator | TASK [Print report file information] *******************************************
2026-03-29 04:01:08.078076 | orchestrator | Sunday 29 March 2026 04:01:01 +0000 (0:00:00.074) 0:00:10.372 **********
2026-03-29 04:01:08.078080 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.078084 | orchestrator |
2026-03-29 04:01:08.078087 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-29 04:01:08.078091 | orchestrator | Sunday 29 March 2026 04:01:01 +0000 (0:00:00.257) 0:00:10.629 **********
2026-03-29 04:01:08.078095 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.078099 | orchestrator |
2026-03-29 04:01:08.078102 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-29 04:01:08.078106 | orchestrator | Sunday 29 March 2026 04:01:01 +0000 (0:00:00.250) 0:00:10.879 **********
2026-03-29 04:01:08.078110 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.078114 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:08.078117 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:08.078121 | orchestrator |
2026-03-29 04:01:08.078125 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-03-29 04:01:08.078130 | orchestrator | Sunday 29 March 2026 04:01:01 +0000 (0:00:00.299) 0:00:11.179 **********
2026-03-29 04:01:08.078134 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.078139 | orchestrator |
2026-03-29 04:01:08.078144 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-03-29 04:01:08.078148 | orchestrator | Sunday 29 March 2026 04:01:02 +0000 (0:00:00.728) 0:00:11.908 **********
2026-03-29 04:01:08.078153 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 04:01:08.078158 | orchestrator |
2026-03-29 04:01:08.078162 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-03-29 04:01:08.078167 | orchestrator | Sunday 29 March 2026 04:01:04 +0000 (0:00:01.704) 0:00:13.612 **********
2026-03-29 04:01:08.078171 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.078176 | orchestrator |
2026-03-29 04:01:08.078180 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-03-29 04:01:08.078187 | orchestrator | Sunday 29 March 2026 04:01:04 +0000 (0:00:00.145) 0:00:13.757 **********
2026-03-29 04:01:08.078193 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.078200 | orchestrator |
2026-03-29 04:01:08.078207 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-03-29 04:01:08.078217 | orchestrator | Sunday 29 March 2026 04:01:04 +0000 (0:00:00.327) 0:00:14.085 **********
2026-03-29 04:01:08.078281 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:08.078287 | orchestrator |
2026-03-29 04:01:08.078294 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-03-29 04:01:08.078300 | orchestrator | Sunday 29 March 2026 04:01:04 +0000 (0:00:00.117) 0:00:14.203 **********
2026-03-29 04:01:08.078307 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.078313 | orchestrator |
2026-03-29 04:01:08.078319 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-29 04:01:08.078326 | orchestrator | Sunday 29 March 2026 04:01:04 +0000 (0:00:00.149) 0:00:14.352 **********
2026-03-29 04:01:08.078332 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:08.078344 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:08.078351 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:08.078357 | orchestrator |
2026-03-29 04:01:08.078364 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-03-29 04:01:08.078370 | orchestrator | Sunday 29 March 2026 04:01:05 +0000 (0:00:00.324) 0:00:14.677 **********
2026-03-29 04:01:08.078377 | orchestrator | changed: [testbed-node-3]
2026-03-29 04:01:08.078383 | orchestrator | changed: [testbed-node-4]
2026-03-29 04:01:08.078390 | orchestrator | changed: [testbed-node-5]
2026-03-29 04:01:18.735953 | orchestrator |
2026-03-29 04:01:18.736041 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-03-29 04:01:18.736050 | orchestrator | Sunday 29 March 2026 04:01:08 +0000 (0:00:02.747) 0:00:17.424 **********
2026-03-29 04:01:18.736057 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:18.736065 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:18.736071 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:18.736077 | orchestrator |
2026-03-29 04:01:18.736083 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-03-29 04:01:18.736089 | orchestrator | Sunday 29 March 2026 04:01:08 +0000 (0:00:00.336) 0:00:17.760 **********
2026-03-29 04:01:18.736095 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:18.736102 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:18.736108 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:18.736114 | orchestrator |
2026-03-29 04:01:18.736120 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-03-29 04:01:18.736127 | orchestrator | Sunday 29 March 2026 04:01:08 +0000 (0:00:00.513) 0:00:18.274 **********
2026-03-29 04:01:18.736133 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:18.736140 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:01:18.736146 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:01:18.736152 | orchestrator |
2026-03-29 04:01:18.736158 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-03-29 04:01:18.736165 | orchestrator | Sunday 29 March 2026 04:01:09 +0000 (0:00:00.308) 0:00:18.582 **********
2026-03-29 04:01:18.736172 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:18.736178 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:18.736185 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:18.736192 | orchestrator |
2026-03-29 04:01:18.736199 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-03-29 04:01:18.736206 | orchestrator | Sunday 29 March 2026 04:01:09 +0000 (0:00:00.568) 0:00:19.151 **********
2026-03-29 04:01:18.736212 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:18.736219 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:01:18.736257 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:01:18.736266 | orchestrator |
2026-03-29 04:01:18.736272 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-03-29 04:01:18.736279 | orchestrator | Sunday 29 March 2026 04:01:10 +0000 (0:00:00.348) 0:00:19.500 **********
2026-03-29 04:01:18.736286 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:18.736293 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:01:18.736300 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:01:18.736306 | orchestrator |
2026-03-29 04:01:18.736313 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-29 04:01:18.736320 | orchestrator | Sunday 29 March 2026 04:01:10 +0000 (0:00:00.303) 0:00:19.803 **********
2026-03-29 04:01:18.736327 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:18.736334 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:18.736340 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:18.736347 | orchestrator |
2026-03-29 04:01:18.736354 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-03-29 04:01:18.736361 | orchestrator | Sunday 29 March 2026 04:01:10 +0000 (0:00:00.496) 0:00:20.300 **********
2026-03-29 04:01:18.736368 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:18.736374 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:18.736380 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:18.736406 | orchestrator |
2026-03-29 04:01:18.736415 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-03-29 04:01:18.736421 | orchestrator | Sunday 29 March 2026 04:01:11 +0000 (0:00:00.815) 0:00:21.115 **********
2026-03-29 04:01:18.736427 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:18.736433 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:18.736439 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:18.736445 | orchestrator |
2026-03-29 04:01:18.736451 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-03-29 04:01:18.736458 | orchestrator | Sunday 29 March 2026 04:01:12 +0000 (0:00:00.325) 0:00:21.441 **********
2026-03-29 04:01:18.736464 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:18.736470 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:01:18.736476 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:01:18.736482 | orchestrator |
2026-03-29 04:01:18.736489 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-03-29 04:01:18.736495 | orchestrator | Sunday 29 March 2026 04:01:12 +0000 (0:00:00.301) 0:00:21.742 **********
2026-03-29 04:01:18.736502 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:01:18.736508 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:01:18.736515 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:01:18.736521 | orchestrator |
2026-03-29 04:01:18.736528 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-29 04:01:18.736535 | orchestrator | Sunday 29 March 2026 04:01:12 +0000 (0:00:00.562) 0:00:22.305 **********
2026-03-29 04:01:18.736542 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 04:01:18.736549 | orchestrator |
2026-03-29 04:01:18.736599 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-29 04:01:18.736607 | orchestrator | Sunday 29 March 2026 04:01:13 +0000 (0:00:00.306) 0:00:22.611 **********
2026-03-29 04:01:18.736613 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:01:18.736620 | orchestrator |
2026-03-29 04:01:18.736627 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-29 04:01:18.736634 | orchestrator | Sunday 29 March 2026 04:01:13 +0000 (0:00:00.290) 0:00:22.902 **********
2026-03-29 04:01:18.736641 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 04:01:18.736647 | orchestrator |
2026-03-29 04:01:18.736654 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-29 04:01:18.736661 | orchestrator | Sunday 29 March 2026 04:01:15 +0000 (0:00:01.749) 0:00:24.652 **********
2026-03-29 04:01:18.736667 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 04:01:18.736674 | orchestrator |
2026-03-29 04:01:18.736680 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-29 04:01:18.736687 | orchestrator | Sunday 29 March 2026 04:01:15 +0000 (0:00:00.284) 0:00:24.936 **********
2026-03-29 04:01:18.736693 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 04:01:18.736699 | orchestrator |
2026-03-29 04:01:18.736722 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 04:01:18.736730 | orchestrator | Sunday 29 March 2026 04:01:15 +0000 (0:00:00.074) 0:00:25.217 **********
2026-03-29 04:01:18.736737 | orchestrator |
2026-03-29 04:01:18.736743 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 04:01:18.736749 | orchestrator | Sunday 29 March 2026 04:01:15 +0000 (0:00:00.071) 0:00:25.291 **********
2026-03-29 04:01:18.736756 | orchestrator |
2026-03-29 04:01:18.736762 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 04:01:18.736769 | orchestrator | Sunday 29 March 2026 04:01:15 +0000 (0:00:00.074) 0:00:25.362 **********
2026-03-29 04:01:18.736775 | orchestrator |
2026-03-29 04:01:18.736782 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-29 04:01:18.736788 | orchestrator | Sunday 29 March 2026 04:01:16 +0000 (0:00:00.074) 0:00:25.437 **********
2026-03-29 04:01:18.736795 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 04:01:18.736809 | orchestrator |
2026-03-29 04:01:18.736816 | orchestrator | TASK [Print report file information] *******************************************
2026-03-29 04:01:18.736823 | orchestrator | Sunday 29 March 2026 04:01:17 +0000 (0:00:01.642) 0:00:27.079 **********
2026-03-29 04:01:18.736829 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-03-29 04:01:18.736836 | orchestrator |  "msg": [
2026-03-29 04:01:18.736842 | orchestrator |  "Validator run completed.",
2026-03-29 04:01:18.736855 | orchestrator |  "You can find the report file here:",
2026-03-29 04:01:18.736861 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-29T04:00:51+00:00-report.json",
2026-03-29 04:01:18.736868 | orchestrator |  "on the following host:",
2026-03-29 04:01:18.736872 | orchestrator |  "testbed-manager"
2026-03-29 04:01:18.736876 | orchestrator |  ]
2026-03-29 04:01:18.736880 | orchestrator | }
2026-03-29 04:01:18.736885 | orchestrator |
2026-03-29 04:01:18.736888 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 04:01:18.736893 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-29 04:01:18.736899 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-29 04:01:18.736903 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-29 04:01:18.736907 | orchestrator |
2026-03-29 04:01:18.736911 | orchestrator |
2026-03-29 04:01:18.736915 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 04:01:18.736918 | orchestrator | Sunday 29 March 2026 04:01:18 +0000 (0:00:00.662) 0:00:27.742 **********
2026-03-29 04:01:18.736922 | orchestrator | ===============================================================================
2026-03-29 04:01:18.736926 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.75s
2026-03-29 04:01:18.736930 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s
2026-03-29 04:01:18.736934 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.70s
2026-03-29 04:01:18.736937 | orchestrator | Write report file ------------------------------------------------------- 1.64s
2026-03-29 04:01:18.736941 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s
2026-03-29 04:01:18.736945 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.82s
2026-03-29 04:01:18.736949 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.81s
2026-03-29 04:01:18.736953 | orchestrator | Create report output directory ------------------------------------------ 0.77s
2026-03-29 04:01:18.736956 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.73s
2026-03-29 04:01:18.736960 | orchestrator | Aggregate test results step one ----------------------------------------- 0.71s
2026-03-29 04:01:18.736964 | orchestrator | Print report file information ------------------------------------------- 0.66s
2026-03-29 04:01:18.736968 |
orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.57s 2026-03-29 04:01:18.736972 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.56s 2026-03-29 04:01:18.736975 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.56s 2026-03-29 04:01:18.736979 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.56s 2026-03-29 04:01:18.736983 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.55s 2026-03-29 04:01:18.736987 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2026-03-29 04:01:18.736991 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.51s 2026-03-29 04:01:18.736994 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-03-29 04:01:18.737004 | orchestrator | Get list of ceph-osd containers that are not running -------------------- 0.36s 2026-03-29 04:01:19.101667 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-29 04:01:19.109225 | orchestrator | + set -e 2026-03-29 04:01:19.109289 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 04:01:19.111894 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 04:01:19.111948 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 04:01:19.111957 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 04:01:19.111964 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 04:01:19.111971 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 04:01:19.111979 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 04:01:19.111986 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 04:01:19.111992 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 04:01:19.111999 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 
04:01:19.112005 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 04:01:19.112012 | orchestrator | ++ export ARA=false 2026-03-29 04:01:19.112019 | orchestrator | ++ ARA=false 2026-03-29 04:01:19.112026 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 04:01:19.112032 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 04:01:19.112039 | orchestrator | ++ export TEMPEST=false 2026-03-29 04:01:19.112046 | orchestrator | ++ TEMPEST=false 2026-03-29 04:01:19.112053 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 04:01:19.112060 | orchestrator | ++ IS_ZUUL=true 2026-03-29 04:01:19.112066 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 04:01:19.112073 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 04:01:19.112079 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 04:01:19.112086 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 04:01:19.112092 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 04:01:19.112099 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 04:01:19.112106 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 04:01:19.112113 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 04:01:19.112119 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 04:01:19.112126 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 04:01:19.112132 | orchestrator | + source /etc/os-release 2026-03-29 04:01:19.112139 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-29 04:01:19.112145 | orchestrator | ++ NAME=Ubuntu 2026-03-29 04:01:19.112152 | orchestrator | ++ VERSION_ID=24.04 2026-03-29 04:01:19.112158 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-29 04:01:19.112165 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-29 04:01:19.112171 | orchestrator | ++ ID=ubuntu 2026-03-29 04:01:19.112178 | orchestrator | ++ ID_LIKE=debian 2026-03-29 04:01:19.112184 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-29 04:01:19.112191 | orchestrator 
| ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-29 04:01:19.112198 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-29 04:01:19.112204 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-29 04:01:19.112212 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-29 04:01:19.112219 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-29 04:01:19.112225 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-29 04:01:19.112250 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-29 04:01:19.112269 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-29 04:01:19.142085 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-29 04:01:43.326864 | orchestrator | 2026-03-29 04:01:43.326949 | orchestrator | # Status of Elasticsearch 2026-03-29 04:01:43.326960 | orchestrator | 2026-03-29 04:01:43.326968 | orchestrator | + pushd /opt/configuration/contrib 2026-03-29 04:01:43.326976 | orchestrator | + echo 2026-03-29 04:01:43.326983 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-29 04:01:43.326990 | orchestrator | + echo 2026-03-29 04:01:43.326997 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-29 04:01:43.527149 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-29 04:01:43.527233 | orchestrator | 2026-03-29 04:01:43.527242 | orchestrator | # Status of MariaDB 2026-03-29 04:01:43.527278 | orchestrator | + echo 2026-03-29 04:01:43.527284 | orchestrator | + echo '# Status of MariaDB' 2026-03-29 04:01:43.527289 | orchestrator | 2026-03-29 04:01:43.527294 | orchestrator | + echo 2026-03-29 04:01:43.527932 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-29 04:01:43.574242 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 04:01:43.574315 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-29 04:01:43.574328 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-29 04:01:43.574339 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-29 04:01:43.636998 | orchestrator | Reading package lists... 2026-03-29 04:01:44.041928 | orchestrator | Building dependency tree... 2026-03-29 04:01:44.042929 | orchestrator | Reading state information... 2026-03-29 04:01:44.499042 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-29 04:01:44.499119 | orchestrator | bc set to manually installed. 2026-03-29 04:01:44.499127 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-03-29 04:01:45.197343 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-03-29 04:01:45.197411 | orchestrator |
2026-03-29 04:01:45.197418 | orchestrator | # Status of Prometheus
2026-03-29 04:01:45.197432 | orchestrator |
2026-03-29 04:01:45.197437 | orchestrator | + echo
2026-03-29 04:01:45.197447 | orchestrator | + echo '# Status of Prometheus'
2026-03-29 04:01:45.197451 | orchestrator | + echo
2026-03-29 04:01:45.197455 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-03-29 04:01:45.273768 | orchestrator | Unauthorized
2026-03-29 04:01:45.277186 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-03-29 04:01:45.327094 | orchestrator | Unauthorized
2026-03-29 04:01:45.331503 | orchestrator |
2026-03-29 04:01:45.331636 | orchestrator | # Status of RabbitMQ
2026-03-29 04:01:45.331647 | orchestrator |
2026-03-29 04:01:45.331655 | orchestrator | + echo
2026-03-29 04:01:45.331661 | orchestrator | + echo '# Status of RabbitMQ'
2026-03-29 04:01:45.331668 | orchestrator | + echo
2026-03-29 04:01:45.332062 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-29 04:01:45.386912 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-29 04:01:45.386977 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-29 04:01:45.386984 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-03-29 04:01:45.853613 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-03-29 04:01:45.861960 | orchestrator |
2026-03-29 04:01:45.862086 | orchestrator | # Status of Redis
2026-03-29 04:01:45.862100 | orchestrator |
2026-03-29 04:01:45.862110 | orchestrator | + echo
2026-03-29 04:01:45.862121 | orchestrator | + echo '# Status of Redis'
2026-03-29 04:01:45.862131 | orchestrator | + echo
2026-03-29 04:01:45.862143 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-03-29 04:01:45.869028 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001753s;;;0.000000;10.000000
2026-03-29 04:01:45.869128 | orchestrator |
2026-03-29 04:01:45.869142 | orchestrator | # Create backup of MariaDB database
2026-03-29 04:01:45.869152 | orchestrator |
2026-03-29 04:01:45.869162 | orchestrator | + popd
2026-03-29 04:01:45.869171 | orchestrator | + echo
2026-03-29 04:01:45.869180 | orchestrator | + echo '# Create backup of MariaDB database'
2026-03-29 04:01:45.869189 | orchestrator | + echo
2026-03-29 04:01:45.869199 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-03-29 04:01:47.922582 | orchestrator | 2026-03-29 04:01:47 | INFO  | Task 090be178-2a9d-40e5-a607-35f588205959 (mariadb_backup) was prepared for execution.
2026-03-29 04:01:47.922655 | orchestrator | 2026-03-29 04:01:47 | INFO  | It takes a moment until task 090be178-2a9d-40e5-a607-35f588205959 (mariadb_backup) has been started and output is visible here.
2026-03-29 04:02:19.286393 | orchestrator |
2026-03-29 04:02:19.286491 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 04:02:19.286502 | orchestrator |
2026-03-29 04:02:19.286509 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 04:02:19.286516 | orchestrator | Sunday 29 March 2026 04:01:52 +0000 (0:00:00.182) 0:00:00.182 **********
2026-03-29 04:02:19.286600 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:02:19.286611 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:02:19.286617 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:02:19.286648 | orchestrator |
2026-03-29 04:02:19.286654 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 04:02:19.286661 | orchestrator | Sunday 29 March 2026 04:01:52 +0000 (0:00:00.331) 0:00:00.514 **********
2026-03-29 04:02:19.286667 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-29 04:02:19.286674 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-29 04:02:19.286682 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-29 04:02:19.286686 | orchestrator |
2026-03-29 04:02:19.286690 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-29 04:02:19.286694 | orchestrator |
2026-03-29 04:02:19.286698 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-29 04:02:19.286702 | orchestrator | Sunday 29 March 2026 04:01:53 +0000 (0:00:00.616) 0:00:01.131 **********
2026-03-29 04:02:19.286706 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 04:02:19.286710 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 04:02:19.286714 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 04:02:19.286718 | orchestrator |
2026-03-29 04:02:19.286722 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-29 04:02:19.286726 | orchestrator | Sunday 29 March 2026 04:01:53 +0000 (0:00:00.423) 0:00:01.554 **********
2026-03-29 04:02:19.286751 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 04:02:19.286756 | orchestrator |
2026-03-29 04:02:19.286760 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-03-29 04:02:19.286764 | orchestrator | Sunday 29 March 2026 04:01:54 +0000 (0:00:00.559) 0:00:02.114 **********
2026-03-29 04:02:19.286768 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:02:19.286772 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:02:19.286776 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:02:19.286779 | orchestrator |
2026-03-29 04:02:19.286783 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-03-29 04:02:19.286787 | orchestrator | Sunday 29 March 2026 04:01:57 +0000 (0:00:03.474) 0:00:05.588 **********
2026-03-29 04:02:19.286791 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-29 04:02:19.286794 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-29 04:02:19.286799 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-29 04:02:19.286806 | orchestrator | mariadb_bootstrap_restart
2026-03-29 04:02:19.286812 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:02:19.286818 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:02:19.286824 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:02:19.286830 | orchestrator |
2026-03-29 04:02:19.286836 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-29 04:02:19.286843 | orchestrator | skipping: no hosts matched
2026-03-29 04:02:19.286849 | orchestrator |
2026-03-29 04:02:19.286855 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-29 04:02:19.286862 | orchestrator | skipping: no hosts matched
2026-03-29 04:02:19.286867 | orchestrator |
2026-03-29 04:02:19.286874 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-29 04:02:19.286880 | orchestrator | skipping: no hosts matched
2026-03-29 04:02:19.286886 | orchestrator |
2026-03-29 04:02:19.286892 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-29 04:02:19.286899 | orchestrator |
2026-03-29 04:02:19.286905 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-29 04:02:19.286912 | orchestrator | Sunday 29 March 2026 04:02:18 +0000 (0:00:20.421) 0:00:26.009 **********
2026-03-29 04:02:19.286919 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:02:19.286926 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:02:19.286933 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:02:19.286954 | orchestrator |
2026-03-29 04:02:19.286963 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-29 04:02:19.286969 | orchestrator | Sunday 29 March 2026 04:02:18 +0000 (0:00:00.317) 0:00:26.326 **********
2026-03-29 04:02:19.286976 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:02:19.286983 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:02:19.286989 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:02:19.286996 | orchestrator |
2026-03-29 04:02:19.287002 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 04:02:19.287010 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 04:02:19.287019 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 04:02:19.287026 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 04:02:19.287033 | orchestrator |
2026-03-29 04:02:19.287039 | orchestrator |
2026-03-29 04:02:19.287045 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 04:02:19.287052 | orchestrator | Sunday 29 March 2026 04:02:18 +0000 (0:00:00.408) 0:00:26.735 **********
2026-03-29 04:02:19.287059 | orchestrator | ===============================================================================
2026-03-29 04:02:19.287065 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 20.42s
2026-03-29 04:02:19.287091 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.47s
2026-03-29 04:02:19.287098 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2026-03-29 04:02:19.287105 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s
2026-03-29 04:02:19.287111 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s
2026-03-29 04:02:19.287118 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s
2026-03-29 04:02:19.287124 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-03-29 04:02:19.287131 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s
2026-03-29 04:02:19.628452 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-03-29 04:02:19.638920 | orchestrator | + set -e
2026-03-29 04:02:19.639005 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-29 04:02:19.639768 | orchestrator | ++ export INTERACTIVE=false
2026-03-29 04:02:19.639796 | orchestrator | ++ INTERACTIVE=false
2026-03-29 04:02:19.639803 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-29 04:02:19.639864 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-29 04:02:19.639882 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-29 04:02:19.642095 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-29 04:02:19.647236 | orchestrator |
2026-03-29 04:02:19.647307 | orchestrator | # OpenStack endpoints
2026-03-29 04:02:19.647320 | orchestrator |
2026-03-29 04:02:19.647329 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-29 04:02:19.647338 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-29 04:02:19.647348 | orchestrator | + export OS_CLOUD=admin
2026-03-29 04:02:19.647356 | orchestrator | + OS_CLOUD=admin
2026-03-29 04:02:19.647365 | orchestrator | + echo
2026-03-29 04:02:19.647374 | orchestrator | + echo '# OpenStack endpoints'
2026-03-29 04:02:19.647382 | orchestrator | + echo
2026-03-29 04:02:19.647391 | orchestrator | + openstack endpoint list
2026-03-29 04:02:22.820227 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-29 04:02:22.820331 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-03-29 04:02:22.820341 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-29 04:02:22.820366 | orchestrator | | 0703aac5173f45c28ef97b2883a9e3c7 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-03-29 04:02:22.820373 | orchestrator | | 08efd74ef8c84e628d6bff7ae40a9154 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-03-29 04:02:22.820378 | orchestrator | | 0a7ce666b1c343cab898283748cc746d | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-03-29 04:02:22.820384 | orchestrator | | 1b6b7d90172b493ca83a1790c724f81c | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-03-29 04:02:22.820390 | orchestrator | | 2cbca1b0b2ab4fb785732bb962250af7 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-03-29 04:02:22.820395 | orchestrator | | 3ee53fc11c4b48988978d5d1f6f5072b | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-03-29 04:02:22.820401 | orchestrator | | 43e4e03e509e420fab1a07eeabdeed26 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-29 04:02:22.820407 | orchestrator | | 470fcd125ef64efd84d2481eed89d283 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-03-29 04:02:22.820412 | orchestrator | | 64df1c33f13040028212f5bd8e3d4b57 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-03-29 04:02:22.820418 | orchestrator | | 69423418952e433c881d554e5a04ad1d | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-29 04:02:22.820438 | orchestrator | | 6c3ef42f60fe4541a1479aa1b40b9053 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-03-29 04:02:22.820448 | orchestrator | | 6d48e580b85a406e8076efbaa7f7fd71 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-03-29 04:02:22.820456 | orchestrator | | 721566b04d1843fba98a6bf556b7e707 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-29 04:02:22.820465 | orchestrator | | 786e27d04cb04672bd1522823d4b05fc | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-03-29 04:02:22.820473 | orchestrator | | 79bacb2394184d5c8f19e97d2f1e40ec | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-29 04:02:22.820482 | orchestrator | | a57e80c8e92049fba18d36909a3ec473 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-29 04:02:22.820490 | orchestrator | | a6c075d93fa34519bc4407ab1de0083a | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-03-29 04:02:22.820498 | orchestrator | | ab90223222f047089455b8079770fd51 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-03-29 04:02:22.820505 | orchestrator | | afa5ae4344804e2dbca18b495fc9cdf0 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-29 04:02:22.820514 | orchestrator | | b760628d2dee48cdbb11201fbf2c63a2 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-03-29 04:02:22.820620 | orchestrator | | bd841d71354b4e4fbbb9ae04e3557c5b | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-03-29 04:02:22.820638 | orchestrator | | c27e3089bacb4250b3754c4225d89290 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-03-29 04:02:22.820647 | orchestrator | | c3c883d4b4a046c0afefbcc17bee5e0e | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-29 04:02:22.820656 | orchestrator | | c7906700d9334c60b1523e5a12c6e780 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-03-29 04:02:22.820665 | orchestrator | | d08ecd71241343c9a38a4f4125d423ce | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-29 04:02:22.820677 | orchestrator | | dd179f7b4b18461c8b34c1d98c8591da | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-03-29 04:02:22.820685 | orchestrator | | e1085bf9060d470c9918bd8965174047 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-03-29 04:02:22.820694 | orchestrator | | e76c1f4e1ab449ae8fdff4b31c5f3705 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-03-29 04:02:22.820703 | orchestrator | | eee836ecca0a4cf386dfb129f4d1755e | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-03-29 04:02:22.820711 | orchestrator | | fb4a52b130bd4b4b9fb19a3670e142a5 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-03-29 04:02:22.820720 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-29 04:02:23.094719 | orchestrator |
2026-03-29 04:02:23.094793 | orchestrator | # Cinder
2026-03-29 04:02:23.094800 | orchestrator |
2026-03-29 04:02:23.094804 | orchestrator | + echo
2026-03-29 04:02:23.094809 | orchestrator | + echo '# Cinder'
2026-03-29 04:02:23.094813 | orchestrator | + echo
2026-03-29 04:02:23.094817 | orchestrator | + openstack volume service list
2026-03-29 04:02:25.792929 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-29 04:02:25.793077 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-29 04:02:25.793103 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-29 04:02:25.793121 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-29T04:02:19.000000 |
2026-03-29 04:02:25.793186 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-29T04:02:19.000000 |
2026-03-29 04:02:25.793206 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-29T04:02:20.000000 |
2026-03-29 04:02:25.793223 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-29T04:02:19.000000 |
2026-03-29 04:02:25.793241 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-29T04:02:16.000000 |
2026-03-29 04:02:25.793258 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-29T04:02:17.000000 |
2026-03-29 04:02:25.793275 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-29T04:02:23.000000 |
2026-03-29 04:02:25.793291 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-29T04:02:25.000000 |
2026-03-29 04:02:25.793308 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-29T04:02:25.000000 |
2026-03-29 04:02:25.793352 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-29 04:02:26.072181 | orchestrator |
2026-03-29 04:02:26.072260 | orchestrator | # Neutron
2026-03-29 04:02:26.072270 | orchestrator |
2026-03-29 04:02:26.072277 | orchestrator | + echo
2026-03-29 04:02:26.072284 | orchestrator | + echo '# Neutron'
2026-03-29 04:02:26.072292 | orchestrator | + echo
2026-03-29 04:02:26.072298 | orchestrator | + openstack network agent list
2026-03-29 04:02:28.775990 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-29 04:02:28.776085 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-03-29 04:02:28.776099 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-29 04:02:28.776108 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-03-29 04:02:28.776118 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-03-29 04:02:28.776127 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-03-29 04:02:28.776156 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-03-29 04:02:28.776165 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-03-29 04:02:28.776175 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-03-29 04:02:28.776191 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-29 04:02:28.776204 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-29 04:02:28.776218 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-29 04:02:28.776233 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-29 04:02:29.096910 | orchestrator | + openstack network service provider list
2026-03-29 04:02:31.670399 | orchestrator | +---------------+------+---------+
2026-03-29 04:02:31.670509 | orchestrator
| | Service Type | Name | Default | 2026-03-29 04:02:31.670606 | orchestrator | +---------------+------+---------+ 2026-03-29 04:02:31.670615 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-29 04:02:31.670623 | orchestrator | +---------------+------+---------+ 2026-03-29 04:02:31.917509 | orchestrator | 2026-03-29 04:02:31.917613 | orchestrator | # Nova 2026-03-29 04:02:31.917624 | orchestrator | 2026-03-29 04:02:31.917630 | orchestrator | + echo 2026-03-29 04:02:31.917637 | orchestrator | + echo '# Nova' 2026-03-29 04:02:31.917643 | orchestrator | + echo 2026-03-29 04:02:31.917660 | orchestrator | + openstack compute service list 2026-03-29 04:02:34.588698 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 04:02:34.588798 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-29 04:02:34.588810 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 04:02:34.588819 | orchestrator | | 20759168-6d69-48b4-abed-16d4e2406c78 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-29T04:02:32.000000 | 2026-03-29 04:02:34.588853 | orchestrator | | 6b7724d1-7299-4468-9cc8-2b1af3672533 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-29T04:02:26.000000 | 2026-03-29 04:02:34.588862 | orchestrator | | 49e8417d-fda5-4288-824e-bdb1f52455a7 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-29T04:02:26.000000 | 2026-03-29 04:02:34.588870 | orchestrator | | 570055b1-70a9-402b-8ebd-e24490cb95bc | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-29T04:02:27.000000 | 2026-03-29 04:02:34.588878 | orchestrator | | f0040c09-4038-491c-b3b3-ccf6e4e23818 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-29T04:02:28.000000 | 2026-03-29 
04:02:34.588886 | orchestrator | | 1029c571-12e7-4ba2-b435-ffac00c1097c | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-29T04:02:29.000000 | 2026-03-29 04:02:34.588893 | orchestrator | | 2723fb5c-105f-4589-b36a-3daca461ba03 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-29T04:02:29.000000 | 2026-03-29 04:02:34.588901 | orchestrator | | baa16fcf-c3d4-48c6-9c89-c792ebe09937 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-29T04:02:30.000000 | 2026-03-29 04:02:34.588908 | orchestrator | | 24e116f5-4726-4996-ab22-28bf50bc59ce | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-29T04:02:30.000000 | 2026-03-29 04:02:34.588915 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 04:02:34.852287 | orchestrator | + openstack hypervisor list 2026-03-29 04:02:37.531224 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 04:02:37.531369 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-29 04:02:37.531388 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 04:02:37.531401 | orchestrator | | 192e194a-bde7-46f4-9850-dab79cfc2bc5 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-29 04:02:37.531413 | orchestrator | | b43e64af-4578-4b70-9523-c0bb973c3af6 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-29 04:02:37.531424 | orchestrator | | 961d44bb-c34e-4fe7-95ba-6b62151fa1a0 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-29 04:02:37.531452 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 04:02:37.791276 | orchestrator | 2026-03-29 04:02:37.791386 | orchestrator | # Run OpenStack test play 2026-03-29 
04:02:37.791403 | orchestrator | 2026-03-29 04:02:37.791416 | orchestrator | + echo 2026-03-29 04:02:37.791428 | orchestrator | + echo '# Run OpenStack test play' 2026-03-29 04:02:37.791445 | orchestrator | + echo 2026-03-29 04:02:37.791457 | orchestrator | + osism apply --environment openstack test 2026-03-29 04:02:39.752308 | orchestrator | 2026-03-29 04:02:39 | INFO  | Trying to run play test in environment openstack 2026-03-29 04:02:49.914357 | orchestrator | 2026-03-29 04:02:49 | INFO  | Task 0c58782f-9184-4a63-9dbe-60da1b0983cc (test) was prepared for execution. 2026-03-29 04:02:49.914444 | orchestrator | 2026-03-29 04:02:49 | INFO  | It takes a moment until task 0c58782f-9184-4a63-9dbe-60da1b0983cc (test) has been started and output is visible here. 2026-03-29 04:05:35.850679 | orchestrator | 2026-03-29 04:05:35.850886 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-29 04:05:35.850901 | orchestrator | 2026-03-29 04:05:35.850908 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-29 04:05:35.850915 | orchestrator | Sunday 29 March 2026 04:02:54 +0000 (0:00:00.070) 0:00:00.070 ********** 2026-03-29 04:05:35.850922 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.850930 | orchestrator | 2026-03-29 04:05:35.850936 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-29 04:05:35.850943 | orchestrator | Sunday 29 March 2026 04:02:58 +0000 (0:00:03.646) 0:00:03.716 ********** 2026-03-29 04:05:35.850949 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.850975 | orchestrator | 2026-03-29 04:05:35.850992 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-29 04:05:35.851006 | orchestrator | Sunday 29 March 2026 04:03:02 +0000 (0:00:04.250) 0:00:07.967 ********** 2026-03-29 04:05:35.851012 | orchestrator | changed: [localhost] 2026-03-29 
04:05:35.851018 | orchestrator | 2026-03-29 04:05:35.851025 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-29 04:05:35.851031 | orchestrator | Sunday 29 March 2026 04:03:09 +0000 (0:00:06.709) 0:00:14.676 ********** 2026-03-29 04:05:35.851037 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851043 | orchestrator | 2026-03-29 04:05:35.851050 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-29 04:05:35.851056 | orchestrator | Sunday 29 March 2026 04:03:13 +0000 (0:00:04.127) 0:00:18.804 ********** 2026-03-29 04:05:35.851062 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851068 | orchestrator | 2026-03-29 04:05:35.851074 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-29 04:05:35.851081 | orchestrator | Sunday 29 March 2026 04:03:17 +0000 (0:00:04.276) 0:00:23.081 ********** 2026-03-29 04:05:35.851087 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-29 04:05:35.851137 | orchestrator | changed: [localhost] => (item=member) 2026-03-29 04:05:35.851147 | orchestrator | changed: [localhost] => (item=creator) 2026-03-29 04:05:35.851154 | orchestrator | 2026-03-29 04:05:35.851160 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-29 04:05:35.851166 | orchestrator | Sunday 29 March 2026 04:03:28 +0000 (0:00:11.331) 0:00:34.412 ********** 2026-03-29 04:05:35.851172 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851178 | orchestrator | 2026-03-29 04:05:35.851185 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-29 04:05:35.851191 | orchestrator | Sunday 29 March 2026 04:03:33 +0000 (0:00:05.143) 0:00:39.556 ********** 2026-03-29 04:05:35.851198 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851206 | orchestrator | 2026-03-29 
04:05:35.851213 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-03-29 04:05:35.851220 | orchestrator | Sunday 29 March 2026 04:03:38 +0000 (0:00:04.824) 0:00:44.381 ********** 2026-03-29 04:05:35.851228 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851235 | orchestrator | 2026-03-29 04:05:35.851242 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-29 04:05:35.851250 | orchestrator | Sunday 29 March 2026 04:03:42 +0000 (0:00:04.230) 0:00:48.612 ********** 2026-03-29 04:05:35.851257 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851264 | orchestrator | 2026-03-29 04:05:35.851271 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-03-29 04:05:35.851279 | orchestrator | Sunday 29 March 2026 04:03:46 +0000 (0:00:03.995) 0:00:52.607 ********** 2026-03-29 04:05:35.851286 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851293 | orchestrator | 2026-03-29 04:05:35.851300 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-29 04:05:35.851307 | orchestrator | Sunday 29 March 2026 04:03:50 +0000 (0:00:03.947) 0:00:56.554 ********** 2026-03-29 04:05:35.851314 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851321 | orchestrator | 2026-03-29 04:05:35.851328 | orchestrator | TASK [Create test network] ***************************************************** 2026-03-29 04:05:35.851335 | orchestrator | Sunday 29 March 2026 04:03:54 +0000 (0:00:03.810) 0:01:00.364 ********** 2026-03-29 04:05:35.851342 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851349 | orchestrator | 2026-03-29 04:05:35.851356 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-03-29 04:05:35.851364 | orchestrator | Sunday 29 March 2026 04:03:59 +0000 (0:00:04.970) 0:01:05.334 ********** 2026-03-29 
04:05:35.851371 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851379 | orchestrator | 2026-03-29 04:05:35.851386 | orchestrator | TASK [Create test router] ****************************************************** 2026-03-29 04:05:35.851401 | orchestrator | Sunday 29 March 2026 04:04:04 +0000 (0:00:05.274) 0:01:10.609 ********** 2026-03-29 04:05:35.851409 | orchestrator | changed: [localhost] 2026-03-29 04:05:35.851416 | orchestrator | 2026-03-29 04:05:35.851423 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-29 04:05:35.851430 | orchestrator | 2026-03-29 04:05:35.851437 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-29 04:05:35.851444 | orchestrator | Sunday 29 March 2026 04:04:15 +0000 (0:00:10.797) 0:01:21.406 ********** 2026-03-29 04:05:35.851451 | orchestrator | ok: [localhost] 2026-03-29 04:05:35.851459 | orchestrator | 2026-03-29 04:05:35.851466 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-29 04:05:35.851473 | orchestrator | Sunday 29 March 2026 04:04:19 +0000 (0:00:03.626) 0:01:25.032 ********** 2026-03-29 04:05:35.851481 | orchestrator | skipping: [localhost] 2026-03-29 04:05:35.851488 | orchestrator | 2026-03-29 04:05:35.851496 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-29 04:05:35.851503 | orchestrator | Sunday 29 March 2026 04:04:19 +0000 (0:00:00.063) 0:01:25.096 ********** 2026-03-29 04:05:35.851511 | orchestrator | skipping: [localhost] 2026-03-29 04:05:35.851517 | orchestrator | 2026-03-29 04:05:35.851536 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-29 04:05:35.851543 | orchestrator | Sunday 29 March 2026 04:04:19 +0000 (0:00:00.058) 0:01:25.155 ********** 2026-03-29 04:05:35.851550 | orchestrator | skipping: [localhost] => (item=test-4)  
2026-03-29 04:05:35.851558 | orchestrator | skipping: [localhost] => (item=test-3)
2026-03-29 04:05:35.851580 | orchestrator | skipping: [localhost] => (item=test-2)
2026-03-29 04:05:35.851588 | orchestrator | skipping: [localhost] => (item=test-1)
2026-03-29 04:05:35.851595 | orchestrator | skipping: [localhost] => (item=test)
2026-03-29 04:05:35.851601 | orchestrator | skipping: [localhost]
2026-03-29 04:05:35.851607 | orchestrator |
2026-03-29 04:05:35.851614 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-03-29 04:05:35.851620 | orchestrator | Sunday 29 March 2026 04:04:19 +0000 (0:00:00.155) 0:01:25.311 **********
2026-03-29 04:05:35.851626 | orchestrator | skipping: [localhost]
2026-03-29 04:05:35.851632 | orchestrator |
2026-03-29 04:05:35.851638 | orchestrator | TASK [Create test instances] ***************************************************
2026-03-29 04:05:35.851644 | orchestrator | Sunday 29 March 2026 04:04:19 +0000 (0:00:00.156) 0:01:25.467 **********
2026-03-29 04:05:35.851651 | orchestrator | changed: [localhost] => (item=test)
2026-03-29 04:05:35.851657 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-29 04:05:35.851663 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-29 04:05:35.851669 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-29 04:05:35.851675 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-29 04:05:35.851681 | orchestrator |
2026-03-29 04:05:35.851688 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-03-29 04:05:35.851694 | orchestrator | Sunday 29 March 2026 04:04:24 +0000 (0:00:04.627) 0:01:30.095 **********
2026-03-29 04:05:35.851720 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-29 04:05:35.851728 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-03-29 04:05:35.851734 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-03-29 04:05:35.851741 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-03-29 04:05:35.851747 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
2026-03-29 04:05:35.851755 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j102662768036.3690', 'results_file': '/ansible/.ansible_async/j102662768036.3690', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 04:05:35.851778 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j731632239026.3715', 'results_file': '/ansible/.ansible_async/j731632239026.3715', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 04:05:35.851786 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j688865556070.3740', 'results_file': '/ansible/.ansible_async/j688865556070.3740', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 04:05:35.851792 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j291503895185.3765', 'results_file': '/ansible/.ansible_async/j291503895185.3765', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 04:05:35.851799 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j593000502111.3790', 'results_file': '/ansible/.ansible_async/j593000502111.3790', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 04:05:35.851805 | orchestrator |
2026-03-29 04:05:35.851811 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-03-29 04:05:35.851817 | orchestrator | Sunday 29 March 2026 04:05:21 +0000 (0:00:57.518) 0:02:27.614 **********
2026-03-29 04:05:35.851824 | orchestrator | changed: [localhost] => (item=test)
2026-03-29 04:05:35.851830 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-29 04:05:35.851836 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-29 04:05:35.851843 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-29 04:05:35.851849 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-29 04:05:35.851855 | orchestrator |
2026-03-29 04:05:35.851861 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-03-29 04:05:35.851867 | orchestrator | Sunday 29 March 2026 04:05:26 +0000 (0:00:04.616) 0:02:32.231 **********
2026-03-29 04:05:35.851873 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-03-29 04:05:35.851880 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j804485169168.3901', 'results_file': '/ansible/.ansible_async/j804485169168.3901', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 04:05:35.851887 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j182728842839.3926', 'results_file': '/ansible/.ansible_async/j182728842839.3926', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 04:05:35.851893 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j194978120479.3951', 'results_file': '/ansible/.ansible_async/j194978120479.3951', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 04:05:35.851905 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j608175813092.3976', 'results_file': '/ansible/.ansible_async/j608175813092.3976', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 04:06:15.808541 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j226182688235.4001', 'results_file': '/ansible/.ansible_async/j226182688235.4001', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 04:06:15.808642 | orchestrator |
2026-03-29 04:06:15.808654 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-03-29 04:06:15.808663 | orchestrator | Sunday 29 March 2026 04:05:35 +0000 (0:00:09.246) 0:02:41.477 **********
2026-03-29 04:06:15.808670 | orchestrator | changed: [localhost] => (item=test)
2026-03-29 04:06:15.808678 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-29 04:06:15.808685 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-29 04:06:15.808691 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-29 04:06:15.808697 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-29 04:06:15.808727 | orchestrator |
2026-03-29 04:06:15.808734 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-03-29 04:06:15.808740 | orchestrator | Sunday 29 March 2026 04:05:40 +0000 (0:00:05.044) 0:02:46.522 **********
2026-03-29 04:06:15.808746 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-03-29 04:06:15.808754 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j664824190996.4077', 'results_file': '/ansible/.ansible_async/j664824190996.4077', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 04:06:15.808762 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j487737021233.4102', 'results_file': '/ansible/.ansible_async/j487737021233.4102', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 04:06:15.808782 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j252420874225.4128', 'results_file': '/ansible/.ansible_async/j252420874225.4128', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 04:06:15.808789 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j459484789156.4154', 'results_file': '/ansible/.ansible_async/j459484789156.4154', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 04:06:15.808822 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j651479808906.4180', 'results_file': '/ansible/.ansible_async/j651479808906.4180', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 04:06:15.808828 | orchestrator |
2026-03-29 04:06:15.808835 | orchestrator | TASK [Create test volume] ******************************************************
2026-03-29 04:06:15.808841 | orchestrator | Sunday 29 March 2026 04:05:50 +0000 (0:00:09.500) 0:02:56.023 **********
2026-03-29 04:06:15.808847 | orchestrator | changed: [localhost]
2026-03-29 04:06:15.808854 | orchestrator |
2026-03-29 04:06:15.808860 | orchestrator | TASK [Attach test volume] ******************************************************
2026-03-29 04:06:15.808866 | orchestrator | Sunday 29 March 2026 04:05:56 +0000 (0:00:06.327) 0:03:02.350 **********
2026-03-29 04:06:15.808872 | orchestrator | changed: [localhost]
2026-03-29 04:06:15.808878 | orchestrator |
2026-03-29 04:06:15.808884 | orchestrator | TASK [Create floating ip address] **********************************************
2026-03-29 04:06:15.808891 | orchestrator | Sunday 29 March 2026 04:06:10 +0000 (0:00:13.699) 0:03:16.050 **********
2026-03-29 04:06:15.808897 | orchestrator | ok: [localhost]
2026-03-29 04:06:15.808903 | orchestrator |
2026-03-29 04:06:15.808910 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-03-29 04:06:15.808916 | orchestrator | Sunday 29 March 2026 04:06:15 +0000 (0:00:05.077) 0:03:21.128 **********
2026-03-29 04:06:15.808922 | orchestrator | ok: [localhost] => {
2026-03-29 04:06:15.808928 | orchestrator |     "msg": "192.168.112.121"
2026-03-29 04:06:15.808935 | orchestrator | }
2026-03-29 04:06:15.808941 | orchestrator |
2026-03-29 04:06:15.808947 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 04:06:15.808955 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 04:06:15.808962 | orchestrator |
2026-03-29 04:06:15.808968 | orchestrator |
2026-03-29 04:06:15.808975 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 04:06:15.808981 | orchestrator | Sunday 29 March 2026 04:06:15 +0000 (0:00:00.039) 0:03:21.167 **********
2026-03-29 04:06:15.808987 | orchestrator | ===============================================================================
2026-03-29 04:06:15.808993 | orchestrator | Wait for instance creation to complete --------------------------------- 57.52s
2026-03-29 04:06:15.809003 | orchestrator | Attach test volume ----------------------------------------------------- 13.70s
2026-03-29 04:06:15.809015 | orchestrator | Add member roles to user test ------------------------------------------ 11.33s
2026-03-29 04:06:15.809022 | orchestrator | Create test router ----------------------------------------------------- 10.80s
2026-03-29 04:06:15.809028 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.50s
2026-03-29 04:06:15.809034 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.25s
2026-03-29 04:06:15.809040 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.71s
2026-03-29 04:06:15.809060 | orchestrator | Create test volume ------------------------------------------------------ 6.33s
2026-03-29 04:06:15.809068 | orchestrator | Create test subnet ------------------------------------------------------ 5.27s
2026-03-29 04:06:15.809075 | orchestrator | Create test server group ------------------------------------------------ 5.14s
2026-03-29 04:06:15.809082 | orchestrator | Create floating ip address ---------------------------------------------- 5.08s
2026-03-29 04:06:15.809089 | orchestrator | Add tag to instances ---------------------------------------------------- 5.04s
2026-03-29 04:06:15.809097 | orchestrator | Create test network ----------------------------------------------------- 4.97s
2026-03-29 04:06:15.809104 | orchestrator | Create ssh security group ----------------------------------------------- 4.82s
2026-03-29 04:06:15.809111 | orchestrator | Create test instances --------------------------------------------------- 4.63s
2026-03-29 04:06:15.809118 | orchestrator | Add metadata to instances ----------------------------------------------- 4.62s
2026-03-29 04:06:15.809125 | orchestrator | Create test user -------------------------------------------------------- 4.28s
2026-03-29 04:06:15.809132 | orchestrator | Create test-admin user -------------------------------------------------- 4.25s
2026-03-29 04:06:15.809139 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.23s
2026-03-29 04:06:15.809146 | orchestrator | Create test project ----------------------------------------------------- 4.13s
2026-03-29 04:06:16.135504 | orchestrator | + server_list
2026-03-29 04:06:16.135597 | orchestrator | + openstack --os-cloud test server list
2026-03-29 04:06:19.821229 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 04:06:19.821325 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-03-29 04:06:19.821334 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 04:06:19.821340 | orchestrator | | 7aea18db-c253-4190-8d8f-ad043da81dc6 | test-4 | ACTIVE | test=192.168.112.152, 192.168.200.216 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 04:06:19.821345 | orchestrator | | 2a0426cb-539b-4d88-aa44-5c5963b1227e | test-3 | ACTIVE | test=192.168.112.173, 192.168.200.223 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 04:06:19.821350 | orchestrator | | 08f207aa-6df9-404d-be82-d6cc909e96de | test-1 | ACTIVE | test=192.168.112.182, 192.168.200.4 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 04:06:19.821354 | orchestrator | | 1283780f-2f2a-438e-9b94-9107d8fd6ec6 | test | ACTIVE | test=192.168.112.121, 192.168.200.16 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 04:06:19.821359 | orchestrator | | 572d7177-ed15-4bbe-aa9b-3a5d6e308fd5 | test-2 | ACTIVE | test=192.168.112.195, 192.168.200.34 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 04:06:19.821363 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 04:06:20.095459 | orchestrator | + openstack --os-cloud test server show test
2026-03-29 04:06:23.215655 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-29 04:06:23.215785 | orchestrator | | Field | Value |
2026-03-29 04:06:23.215795 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-29 04:06:23.215804 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 04:06:23.215848 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 04:06:23.215855 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 04:06:23.215861 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-03-29 04:06:23.215867 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 04:06:23.215874 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 04:06:23.215905 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 04:06:23.215910 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 04:06:23.215920 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 04:06:23.215924 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 04:06:23.215931 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 04:06:23.215935 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 04:06:23.215939 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 04:06:23.215943 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 04:06:23.215947 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 04:06:23.215951 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T04:04:57.000000 |
2026-03-29 04:06:23.215959 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 04:06:23.215970 | orchestrator | | accessIPv4 | |
2026-03-29 04:06:23.215974 | orchestrator | | accessIPv6 | |
2026-03-29 04:06:23.215997 | orchestrator | | addresses | test=192.168.112.121, 192.168.200.16 |
2026-03-29 04:06:23.216002 | orchestrator | | config_drive | |
2026-03-29 04:06:23.216006 | orchestrator | | created | 2026-03-29T04:04:29Z |
2026-03-29 04:06:23.216010 | orchestrator | | description | None |
2026-03-29 04:06:23.216014 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-29 04:06:23.216018 | orchestrator | | hostId | c23654a2f5f01305c5e852037f2bc6b540db1329bce553a58a6ba25a |
2026-03-29 04:06:23.216022 | orchestrator | | host_status | None |
2026-03-29 04:06:23.216033 | orchestrator | | id | 1283780f-2f2a-438e-9b94-9107d8fd6ec6 |
2026-03-29 04:06:23.216037 | orchestrator | | image | N/A (booted from volume) |
2026-03-29 04:06:23.216041 | orchestrator | | key_name | test |
2026-03-29 04:06:23.216048 | orchestrator | | locked | False |
2026-03-29 04:06:23.216052 | orchestrator | | locked_reason | None |
2026-03-29 04:06:23.216056 | orchestrator | | name | test |
2026-03-29 04:06:23.216060 | orchestrator | | pinned_availability_zone | None |
2026-03-29 04:06:23.216064 | orchestrator | | progress | 0 |
2026-03-29 04:06:23.216068 | orchestrator | | project_id | be3015d7baa44f3a8c06ccb1e80f7a7e |
2026-03-29 04:06:23.216081 | orchestrator | | properties | hostname='test' |
2026-03-29 04:06:23.216088 | orchestrator | | security_groups | name='ssh' |
2026-03-29 04:06:23.216092 | orchestrator | | | name='icmp' |
2026-03-29 04:06:23.216096 | orchestrator | | server_groups | None |
2026-03-29 04:06:23.216100 | orchestrator | | status | ACTIVE |
2026-03-29 04:06:23.216108 | orchestrator | | tags | test |
2026-03-29 04:06:23.216113 | orchestrator | | trusted_image_certificates | None |
2026-03-29 04:06:23.216117 | orchestrator | | updated | 2026-03-29T04:05:27Z |
2026-03-29 04:06:23.216121 | orchestrator | | user_id | 64efa12238844e15bbe534b4351b8e1f |
2026-03-29 04:06:23.216124 | orchestrator | | volumes_attached | delete_on_termination='True', id='b2efb0b3-b53f-4e9a-a54c-b29720657bee' |
2026-03-29 04:06:23.216132 | orchestrator | | | delete_on_termination='False', id='fc9ba010-6831-44f2-a7c0-1792316068e9' |
2026-03-29 04:06:23.220734 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-29 04:06:23.478418 | orchestrator | + openstack --os-cloud test server show test-1
2026-03-29 04:06:26.570806 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-29 04:06:26.570922 | orchestrator | | Field | Value |
2026-03-29 04:06:26.570955 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-29 04:06:26.570968 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 04:06:26.570979 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 04:06:26.570989 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 04:06:26.571000 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-03-29 04:06:26.571030 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 04:06:26.571041 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 04:06:26.571072 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 04:06:26.571084 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 04:06:26.571094 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 04:06:26.571111 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 04:06:26.571119 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 04:06:26.571125 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 04:06:26.571132 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 04:06:26.571144 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 04:06:26.571151 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 04:06:26.571169 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T04:04:57.000000 |
2026-03-29 04:06:26.571182 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 04:06:26.571188 | orchestrator | | accessIPv4 | |
2026-03-29
04:06:26.571204 | orchestrator | | accessIPv6 | | 2026-03-29 04:06:26.571214 | orchestrator | | addresses | test=192.168.112.182, 192.168.200.4 | 2026-03-29 04:06:26.571220 | orchestrator | | config_drive | | 2026-03-29 04:06:26.571228 | orchestrator | | created | 2026-03-29T04:04:29Z | 2026-03-29 04:06:26.571240 | orchestrator | | description | None | 2026-03-29 04:06:26.571247 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-29 04:06:26.571253 | orchestrator | | hostId | c23654a2f5f01305c5e852037f2bc6b540db1329bce553a58a6ba25a | 2026-03-29 04:06:26.571260 | orchestrator | | host_status | None | 2026-03-29 04:06:26.571272 | orchestrator | | id | 08f207aa-6df9-404d-be82-d6cc909e96de | 2026-03-29 04:06:26.571279 | orchestrator | | image | N/A (booted from volume) | 2026-03-29 04:06:26.571286 | orchestrator | | key_name | test | 2026-03-29 04:06:26.571296 | orchestrator | | locked | False | 2026-03-29 04:06:26.571303 | orchestrator | | locked_reason | None | 2026-03-29 04:06:26.571309 | orchestrator | | name | test-1 | 2026-03-29 04:06:26.571320 | orchestrator | | pinned_availability_zone | None | 2026-03-29 04:06:26.571327 | orchestrator | | progress | 0 | 2026-03-29 04:06:26.571333 | orchestrator | | project_id | be3015d7baa44f3a8c06ccb1e80f7a7e | 2026-03-29 04:06:26.571340 | orchestrator | | properties | hostname='test-1' | 2026-03-29 04:06:26.571352 | orchestrator | | security_groups | name='ssh' | 2026-03-29 04:06:26.571359 | orchestrator | | | name='icmp' | 2026-03-29 04:06:26.571365 | orchestrator | | server_groups | None | 2026-03-29 04:06:26.571375 | orchestrator | | status | ACTIVE | 2026-03-29 
04:06:26.571382 | orchestrator | | tags | test | 2026-03-29 04:06:26.571393 | orchestrator | | trusted_image_certificates | None | 2026-03-29 04:06:26.571399 | orchestrator | | updated | 2026-03-29T04:05:28Z | 2026-03-29 04:06:26.571406 | orchestrator | | user_id | 64efa12238844e15bbe534b4351b8e1f | 2026-03-29 04:06:26.571412 | orchestrator | | volumes_attached | delete_on_termination='True', id='00bfbc63-83a9-41aa-9746-dd0513fd3427' | 2026-03-29 04:06:26.575242 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:26.854343 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-29 04:06:29.848287 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:29.848365 | orchestrator | | Field | Value | 2026-03-29 04:06:29.848373 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:29.848378 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-29 04:06:29.848394 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-29 04:06:29.848398 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-29 04:06:29.848403 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-29 04:06:29.848413 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-29 04:06:29.848418 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-29 04:06:29.848432 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-29 04:06:29.848436 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-29 04:06:29.848442 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-29 04:06:29.848451 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-29 04:06:29.848461 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-29 04:06:29.848468 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-29 04:06:29.848474 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-29 04:06:29.848481 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-29 04:06:29.848487 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-29 04:06:29.848493 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T04:04:57.000000 | 2026-03-29 04:06:29.848504 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-29 04:06:29.848510 | orchestrator | | accessIPv4 | | 2026-03-29 04:06:29.848516 | orchestrator | | accessIPv6 | | 2026-03-29 04:06:29.848529 | orchestrator | | addresses | test=192.168.112.195, 192.168.200.34 | 2026-03-29 04:06:29.848536 | orchestrator | | config_drive | | 2026-03-29 04:06:29.848541 | orchestrator | | created | 2026-03-29T04:04:29Z | 2026-03-29 04:06:29.848547 | orchestrator | | description | None | 2026-03-29 04:06:29.848554 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-29 04:06:29.848560 | orchestrator | | hostId | c23654a2f5f01305c5e852037f2bc6b540db1329bce553a58a6ba25a | 2026-03-29 04:06:29.848566 | orchestrator | | host_status | None | 2026-03-29 04:06:29.848578 | orchestrator | | id | 572d7177-ed15-4bbe-aa9b-3a5d6e308fd5 | 2026-03-29 04:06:29.848585 | orchestrator | | image | N/A (booted from volume) | 2026-03-29 04:06:29.848591 | orchestrator | | key_name | test | 2026-03-29 04:06:29.848605 | orchestrator | | locked | False | 2026-03-29 04:06:29.848612 | orchestrator | | locked_reason | None | 2026-03-29 04:06:29.848618 | orchestrator | | name | test-2 | 2026-03-29 04:06:29.848686 | orchestrator | | pinned_availability_zone | None | 2026-03-29 04:06:29.848696 | orchestrator | | progress | 0 | 2026-03-29 04:06:29.848703 | orchestrator | | project_id | be3015d7baa44f3a8c06ccb1e80f7a7e | 2026-03-29 04:06:29.848709 | orchestrator | | properties | hostname='test-2' | 2026-03-29 04:06:29.848723 | orchestrator | | security_groups | name='ssh' | 2026-03-29 04:06:29.848729 | orchestrator | | | name='icmp' | 2026-03-29 04:06:29.848738 | orchestrator | | server_groups | None | 2026-03-29 04:06:29.848745 | orchestrator | | status | ACTIVE | 2026-03-29 04:06:29.848749 | orchestrator | | tags | test | 2026-03-29 04:06:29.848753 | orchestrator | | trusted_image_certificates | None | 2026-03-29 04:06:29.848756 | orchestrator | | updated | 2026-03-29T04:05:29Z | 2026-03-29 04:06:29.848760 | orchestrator | | user_id | 64efa12238844e15bbe534b4351b8e1f | 2026-03-29 04:06:29.848764 | orchestrator | | volumes_attached | delete_on_termination='True', id='cdf6008b-269b-49b0-9e1e-93d1c15f0294' | 2026-03-29 04:06:29.854581 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:30.136174 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-29 04:06:33.204586 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:33.204714 | orchestrator | | Field | Value | 2026-03-29 04:06:33.204723 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:33.204738 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-29 04:06:33.204744 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-29 04:06:33.204750 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-29 04:06:33.204756 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-29 04:06:33.204762 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-29 04:06:33.204769 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-29 
04:06:33.204790 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-29 04:06:33.204802 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-29 04:06:33.204809 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-29 04:06:33.204815 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-29 04:06:33.204825 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-29 04:06:33.204870 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-29 04:06:33.204876 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-29 04:06:33.204881 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-29 04:06:33.204887 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-29 04:06:33.204893 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T04:04:59.000000 | 2026-03-29 04:06:33.204917 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-29 04:06:33.204923 | orchestrator | | accessIPv4 | | 2026-03-29 04:06:33.204929 | orchestrator | | accessIPv6 | | 2026-03-29 04:06:33.204935 | orchestrator | | addresses | test=192.168.112.173, 192.168.200.223 | 2026-03-29 04:06:33.204941 | orchestrator | | config_drive | | 2026-03-29 04:06:33.204948 | orchestrator | | created | 2026-03-29T04:04:31Z | 2026-03-29 04:06:33.204954 | orchestrator | | description | None | 2026-03-29 04:06:33.204960 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-29 04:06:33.204966 | orchestrator | | hostId | c23654a2f5f01305c5e852037f2bc6b540db1329bce553a58a6ba25a | 2026-03-29 04:06:33.204973 | orchestrator | | host_status | None | 2026-03-29 04:06:33.205297 | orchestrator | | id | 
2a0426cb-539b-4d88-aa44-5c5963b1227e | 2026-03-29 04:06:33.205320 | orchestrator | | image | N/A (booted from volume) | 2026-03-29 04:06:33.205325 | orchestrator | | key_name | test | 2026-03-29 04:06:33.205330 | orchestrator | | locked | False | 2026-03-29 04:06:33.205335 | orchestrator | | locked_reason | None | 2026-03-29 04:06:33.205339 | orchestrator | | name | test-3 | 2026-03-29 04:06:33.205344 | orchestrator | | pinned_availability_zone | None | 2026-03-29 04:06:33.205348 | orchestrator | | progress | 0 | 2026-03-29 04:06:33.205353 | orchestrator | | project_id | be3015d7baa44f3a8c06ccb1e80f7a7e | 2026-03-29 04:06:33.205363 | orchestrator | | properties | hostname='test-3' | 2026-03-29 04:06:33.205375 | orchestrator | | security_groups | name='ssh' | 2026-03-29 04:06:33.205389 | orchestrator | | | name='icmp' | 2026-03-29 04:06:33.205394 | orchestrator | | server_groups | None | 2026-03-29 04:06:33.205399 | orchestrator | | status | ACTIVE | 2026-03-29 04:06:33.205404 | orchestrator | | tags | test | 2026-03-29 04:06:33.205408 | orchestrator | | trusted_image_certificates | None | 2026-03-29 04:06:33.205413 | orchestrator | | updated | 2026-03-29T04:05:30Z | 2026-03-29 04:06:33.205417 | orchestrator | | user_id | 64efa12238844e15bbe534b4351b8e1f | 2026-03-29 04:06:33.205425 | orchestrator | | volumes_attached | delete_on_termination='True', id='96ba1f23-9258-4b7e-9a56-2fd4d2fe9d13' | 2026-03-29 04:06:33.208682 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:33.476005 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-29 04:06:36.512586 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:36.512670 | orchestrator | | Field | Value | 2026-03-29 04:06:36.512682 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:36.512687 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-29 04:06:36.512692 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-29 04:06:36.512697 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-29 04:06:36.512702 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-29 04:06:36.512722 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-29 04:06:36.512727 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-29 04:06:36.512743 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-29 04:06:36.512752 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-29 04:06:36.512757 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-29 04:06:36.512762 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-29 04:06:36.512767 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-29 04:06:36.512772 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-29 04:06:36.512776 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-29 04:06:36.512785 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-29 04:06:36.512790 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-29 04:06:36.512795 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T04:05:02.000000 | 2026-03-29 04:06:36.512803 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-29 04:06:36.512815 | orchestrator | | accessIPv4 | | 2026-03-29 04:06:36.512820 | orchestrator | | accessIPv6 | | 2026-03-29 04:06:36.512825 | orchestrator | | addresses | test=192.168.112.152, 192.168.200.216 | 2026-03-29 04:06:36.512830 | orchestrator | | config_drive | | 2026-03-29 04:06:36.512854 | orchestrator | | created | 2026-03-29T04:04:33Z | 2026-03-29 04:06:36.512860 | orchestrator | | description | None | 2026-03-29 04:06:36.512868 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-29 04:06:36.512873 | orchestrator | | hostId | c23654a2f5f01305c5e852037f2bc6b540db1329bce553a58a6ba25a | 2026-03-29 04:06:36.512878 | orchestrator | | host_status | None | 2026-03-29 04:06:36.512887 | orchestrator | | id | 7aea18db-c253-4190-8d8f-ad043da81dc6 | 2026-03-29 04:06:36.512895 | orchestrator | | image | N/A (booted from volume) | 2026-03-29 04:06:36.512900 | orchestrator | | key_name | test | 2026-03-29 04:06:36.512904 | orchestrator | | locked | False | 2026-03-29 04:06:36.512909 | orchestrator | | locked_reason | None | 2026-03-29 04:06:36.512914 | orchestrator | | name | test-4 | 2026-03-29 04:06:36.512960 | orchestrator | | pinned_availability_zone | None | 2026-03-29 04:06:36.512965 | orchestrator | | progress | 0 | 2026-03-29 
04:06:36.512970 | orchestrator | | project_id | be3015d7baa44f3a8c06ccb1e80f7a7e | 2026-03-29 04:06:36.512974 | orchestrator | | properties | hostname='test-4' | 2026-03-29 04:06:36.512984 | orchestrator | | security_groups | name='ssh' | 2026-03-29 04:06:36.512992 | orchestrator | | | name='icmp' | 2026-03-29 04:06:36.512997 | orchestrator | | server_groups | None | 2026-03-29 04:06:36.513003 | orchestrator | | status | ACTIVE | 2026-03-29 04:06:36.513007 | orchestrator | | tags | test | 2026-03-29 04:06:36.513016 | orchestrator | | trusted_image_certificates | None | 2026-03-29 04:06:36.513021 | orchestrator | | updated | 2026-03-29T04:05:31Z | 2026-03-29 04:06:36.513026 | orchestrator | | user_id | 64efa12238844e15bbe534b4351b8e1f | 2026-03-29 04:06:36.513030 | orchestrator | | volumes_attached | delete_on_termination='True', id='81d507f8-a0d0-4d6e-bb87-f8df19e41c93' | 2026-03-29 04:06:36.517754 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 04:06:36.782456 | orchestrator | + server_ping 2026-03-29 04:06:36.784226 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-29 04:06:36.784292 | orchestrator | ++ tr -d '\r' 2026-03-29 04:06:39.630091 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 04:06:39.630169 | orchestrator | + ping -c3 192.168.112.182 2026-03-29 04:06:39.647645 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 
2026-03-29 04:06:39.647737 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=11.6 ms 2026-03-29 04:06:40.640386 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.36 ms 2026-03-29 04:06:41.641718 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.95 ms 2026-03-29 04:06:41.641841 | orchestrator | 2026-03-29 04:06:41.642162 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-03-29 04:06:41.642191 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 04:06:41.642202 | orchestrator | rtt min/avg/max/mdev = 1.948/5.308/11.621/4.466 ms 2026-03-29 04:06:41.642225 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 04:06:41.642236 | orchestrator | + ping -c3 192.168.112.121 2026-03-29 04:06:41.653448 | orchestrator | PING 192.168.112.121 (192.168.112.121) 56(84) bytes of data. 2026-03-29 04:06:41.653537 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=1 ttl=63 time=6.22 ms 2026-03-29 04:06:42.650218 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=2 ttl=63 time=2.48 ms 2026-03-29 04:06:43.652660 | orchestrator | 64 bytes from 192.168.112.121: icmp_seq=3 ttl=63 time=2.36 ms 2026-03-29 04:06:43.652756 | orchestrator | 2026-03-29 04:06:43.652767 | orchestrator | --- 192.168.112.121 ping statistics --- 2026-03-29 04:06:43.652775 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-29 04:06:43.652806 | orchestrator | rtt min/avg/max/mdev = 2.362/3.687/6.221/1.792 ms 2026-03-29 04:06:43.653644 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 04:06:43.653661 | orchestrator | + ping -c3 192.168.112.152 2026-03-29 04:06:43.664609 | orchestrator | PING 192.168.112.152 (192.168.112.152) 56(84) bytes of data. 
2026-03-29 04:06:43.664736 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=1 ttl=63 time=7.85 ms 2026-03-29 04:06:44.661323 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=2 ttl=63 time=2.39 ms 2026-03-29 04:06:45.662566 | orchestrator | 64 bytes from 192.168.112.152: icmp_seq=3 ttl=63 time=1.93 ms 2026-03-29 04:06:45.662643 | orchestrator | 2026-03-29 04:06:45.662651 | orchestrator | --- 192.168.112.152 ping statistics --- 2026-03-29 04:06:45.662657 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-29 04:06:45.662663 | orchestrator | rtt min/avg/max/mdev = 1.928/4.055/7.847/2.687 ms 2026-03-29 04:06:45.663162 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 04:06:45.663181 | orchestrator | + ping -c3 192.168.112.195 2026-03-29 04:06:45.672569 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data. 2026-03-29 04:06:45.672642 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=5.39 ms 2026-03-29 04:06:46.672066 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=3.04 ms 2026-03-29 04:06:47.672209 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=1.81 ms 2026-03-29 04:06:47.672321 | orchestrator | 2026-03-29 04:06:47.672342 | orchestrator | --- 192.168.112.195 ping statistics --- 2026-03-29 04:06:47.672357 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 04:06:47.672371 | orchestrator | rtt min/avg/max/mdev = 1.812/3.415/5.393/1.485 ms 2026-03-29 04:06:47.673194 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 04:06:47.673223 | orchestrator | + ping -c3 192.168.112.173 2026-03-29 04:06:47.683929 | orchestrator | PING 192.168.112.173 (192.168.112.173) 56(84) bytes of data. 
2026-03-29 04:06:47.684007 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=1 ttl=63 time=5.68 ms 2026-03-29 04:06:48.682488 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=2 ttl=63 time=2.67 ms 2026-03-29 04:06:49.683657 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=3 ttl=63 time=1.86 ms 2026-03-29 04:06:49.683753 | orchestrator | 2026-03-29 04:06:49.683760 | orchestrator | --- 192.168.112.173 ping statistics --- 2026-03-29 04:06:49.683766 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 04:06:49.683771 | orchestrator | rtt min/avg/max/mdev = 1.855/3.401/5.681/1.645 ms 2026-03-29 04:06:49.684183 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-29 04:06:49.811753 | orchestrator | ok: Runtime: 0:08:06.418195 2026-03-29 04:06:49.852155 | 2026-03-29 04:06:49.852304 | TASK [Run tempest] 2026-03-29 04:06:50.387641 | orchestrator | skipping: Conditional result was False 2026-03-29 04:06:50.405926 | 2026-03-29 04:06:50.406162 | TASK [Check prometheus alert status] 2026-03-29 04:06:50.941864 | orchestrator | skipping: Conditional result was False 2026-03-29 04:06:50.952769 | 2026-03-29 04:06:50.952930 | PLAY [Upgrade testbed] 2026-03-29 04:06:50.965454 | 2026-03-29 04:06:50.965583 | TASK [Print next ceph version] 2026-03-29 04:06:51.045732 | orchestrator | ok 2026-03-29 04:06:51.060078 | 2026-03-29 04:06:51.060240 | TASK [Print next openstack version] 2026-03-29 04:06:51.134203 | orchestrator | ok 2026-03-29 04:06:51.148351 | 2026-03-29 04:06:51.148491 | TASK [Print next manager version] 2026-03-29 04:06:51.225199 | orchestrator | ok 2026-03-29 04:06:51.234354 | 2026-03-29 04:06:51.234485 | TASK [Set cloud fact (Zuul deployment)] 2026-03-29 04:06:51.285479 | orchestrator | ok 2026-03-29 04:06:51.296471 | 2026-03-29 04:06:51.296620 | TASK [Set cloud fact (local deployment)] 2026-03-29 04:06:51.323307 | orchestrator | skipping: Conditional result was False 2026-03-29 04:06:51.340787 | 2026-03-29 
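The `server_ping` step above pipes the floating-IP list through `tr -d '\r'` before looping, because the OpenStack client can emit CRLF line endings; a stray `\r` in `$address` would make `ping` fail on an otherwise valid IP. A minimal, self-contained sketch of that normalization, with a `printf` standing in for the real `openstack floating ip list` call:

```shell
# Stand-in for: openstack --os-cloud test floating ip list --status ACTIVE \
#                   -f value -c "Floating IP Address"
# The \r characters mimic the CRLF endings the client can produce.
list_floating_ips() {
    printf '192.168.112.182\r\n192.168.112.121\r\n192.168.112.152\r\n'
}

count=0
for address in $(list_floating_ips | tr -d '\r'); do
    # The job runs: ping -c3 "$address"
    count=$((count + 1))
    echo "would ping $address"
done
echo "$count addresses checked"
```

Without the `tr`, each loop value would end in a carriage return and the `ping -c3` calls seen in the trace would resolve nothing.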
04:06:51.340988 | TASK [Fetch manager address] 2026-03-29 04:06:51.663755 | orchestrator | ok 2026-03-29 04:06:51.673392 | 2026-03-29 04:06:51.673519 | TASK [Set manager_host address] 2026-03-29 04:06:51.744105 | orchestrator | ok 2026-03-29 04:06:51.755829 | 2026-03-29 04:06:51.755993 | TASK [Run upgrade] 2026-03-29 04:06:52.463097 | orchestrator | + set -e 2026-03-29 04:06:52.463283 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-29 04:06:52.463298 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-29 04:06:52.463312 | orchestrator | + CEPH_VERSION=reef 2026-03-29 04:06:52.463319 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-29 04:06:52.463326 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-29 04:06:52.463340 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-03-29 04:06:52.471849 | orchestrator | + set -e 2026-03-29 04:06:52.471954 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 04:06:52.471965 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 04:06:52.471976 | orchestrator | ++ INTERACTIVE=false 2026-03-29 04:06:52.471982 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 04:06:52.471993 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 04:06:52.472642 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-03-29 04:06:52.503338 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-03-29 04:06:52.504293 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-29 04:06:52.537081 | orchestrator | 2026-03-29 04:06:52.537154 | orchestrator | # UPGRADE MANAGER 2026-03-29 04:06:52.537162 | orchestrator | 2026-03-29 04:06:52.537166 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-03-29 04:06:52.537171 | orchestrator | + echo 2026-03-29 04:06:52.537176 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-03-29 04:06:52.537182 | orchestrator | + echo 2026-03-29 04:06:52.537186 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-29 04:06:52.537191 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-29 04:06:52.537195 | orchestrator | + CEPH_VERSION=reef 2026-03-29 04:06:52.537199 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-29 04:06:52.537210 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-29 04:06:52.537214 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-03-29 04:06:52.542172 | orchestrator | + set -e 2026-03-29 04:06:52.542247 | orchestrator | + VERSION=10.0.0-rc.1 2026-03-29 04:06:52.542255 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-03-29 04:06:52.545387 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-03-29 04:06:52.545439 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-29 04:06:52.550149 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-29 04:06:52.553247 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-29 04:06:52.560736 | orchestrator | /opt/configuration ~ 2026-03-29 04:06:52.560813 | orchestrator | + set -e 2026-03-29 04:06:52.560822 | orchestrator | + pushd /opt/configuration 2026-03-29 04:06:52.560828 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-29 04:06:52.560837 | orchestrator | + source /opt/venv/bin/activate 2026-03-29 04:06:52.561693 | orchestrator | ++ deactivate nondestructive 2026-03-29 04:06:52.561748 | orchestrator | ++ '[' -n '' ']' 2026-03-29 04:06:52.561755 | orchestrator | ++ '[' -n '' ']' 2026-03-29 04:06:52.561761 | orchestrator | ++ hash -r 2026-03-29 04:06:52.561773 | orchestrator | ++ '[' -n '' ']' 2026-03-29 04:06:52.561779 | orchestrator | ++ unset VIRTUAL_ENV 
2026-03-29 04:06:52.561784 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-29 04:06:52.561790 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-29 04:06:52.561797 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-29 04:06:52.561803 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-29 04:06:52.561808 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-29 04:06:52.561814 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-29 04:06:52.561819 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 04:06:52.561828 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 04:06:52.561833 | orchestrator | ++ export PATH 2026-03-29 04:06:52.561918 | orchestrator | ++ '[' -n '' ']' 2026-03-29 04:06:52.562129 | orchestrator | ++ '[' -z '' ']' 2026-03-29 04:06:52.562138 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-29 04:06:52.562144 | orchestrator | ++ PS1='(venv) ' 2026-03-29 04:06:52.562149 | orchestrator | ++ export PS1 2026-03-29 04:06:52.562154 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-29 04:06:52.562160 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-29 04:06:52.562166 | orchestrator | ++ hash -r 2026-03-29 04:06:52.562175 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-29 04:06:53.784791 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-29 04:06:53.785722 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0) 2026-03-29 04:06:53.787127 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-29 04:06:53.789385 | orchestrator | Requirement already satisfied: PyYAML in 
/opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-29 04:06:53.790735 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-03-29 04:06:53.801455 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-29 04:06:53.803180 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-29 04:06:53.804231 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-29 04:06:53.805609 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-29 04:06:53.840859 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-29 04:06:53.842788 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-29 04:06:53.844328 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-29 04:06:53.845665 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-29 04:06:53.849532 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-29 04:06:54.130490 | orchestrator | ++ which gilt 2026-03-29 04:06:54.132166 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-29 04:06:54.132224 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-29 04:06:54.361711 | orchestrator | osism.cfg-generics: 2026-03-29 04:06:54.474484 | orchestrator | - copied (v0.20251130.0) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-29 04:06:54.475616 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-29 04:06:54.477621 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-29 04:06:54.477693 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-29 04:06:55.536776 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-29 04:06:55.546068 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-29 04:06:55.883385 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-29 04:06:55.936699 | orchestrator | ~ 2026-03-29 04:06:55.936792 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-29 04:06:55.936800 | orchestrator | + deactivate 2026-03-29 04:06:55.936806 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-29 04:06:55.936813 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 04:06:55.936817 | orchestrator | + export PATH 2026-03-29 04:06:55.936821 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-29 04:06:55.936826 | orchestrator | + '[' -n '' ']' 2026-03-29 04:06:55.936830 | orchestrator | + hash -r 2026-03-29 04:06:55.936834 | orchestrator | + '[' -n '' ']' 2026-03-29 04:06:55.936838 | orchestrator | + unset VIRTUAL_ENV 2026-03-29 04:06:55.936842 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-29 04:06:55.936846 | 
orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-29 04:06:55.936850 | orchestrator | + unset -f deactivate 2026-03-29 04:06:55.936854 | orchestrator | + popd 2026-03-29 04:06:55.938415 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-03-29 04:06:55.938485 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-03-29 04:06:55.942087 | orchestrator | + set -e 2026-03-29 04:06:55.942132 | orchestrator | + NAMESPACE=kolla/release 2026-03-29 04:06:55.942140 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-29 04:06:55.950326 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-29 04:06:55.955822 | orchestrator | /opt/configuration ~ 2026-03-29 04:06:55.955916 | orchestrator | + set -e 2026-03-29 04:06:55.955923 | orchestrator | + pushd /opt/configuration 2026-03-29 04:06:55.955927 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-29 04:06:55.955932 | orchestrator | + source /opt/venv/bin/activate 2026-03-29 04:06:55.955936 | orchestrator | ++ deactivate nondestructive 2026-03-29 04:06:55.955940 | orchestrator | ++ '[' -n '' ']' 2026-03-29 04:06:55.955944 | orchestrator | ++ '[' -n '' ']' 2026-03-29 04:06:55.955948 | orchestrator | ++ hash -r 2026-03-29 04:06:55.955952 | orchestrator | ++ '[' -n '' ']' 2026-03-29 04:06:55.955956 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-29 04:06:55.955960 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-29 04:06:55.955965 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-29 04:06:55.955973 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-29 04:06:55.955979 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-29 04:06:55.955984 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-29 04:06:55.956036 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-29 04:06:55.956046 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 04:06:55.956054 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 04:06:55.956061 | orchestrator | ++ export PATH 2026-03-29 04:06:55.956067 | orchestrator | ++ '[' -n '' ']' 2026-03-29 04:06:55.956073 | orchestrator | ++ '[' -z '' ']' 2026-03-29 04:06:55.956079 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-29 04:06:55.956085 | orchestrator | ++ PS1='(venv) ' 2026-03-29 04:06:55.956091 | orchestrator | ++ export PS1 2026-03-29 04:06:55.956097 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-29 04:06:55.956102 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-29 04:06:55.956109 | orchestrator | ++ hash -r 2026-03-29 04:06:55.956115 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-29 04:06:56.505525 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-29 04:06:56.506493 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0) 2026-03-29 04:06:56.508144 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-29 04:06:56.509474 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-29 04:06:56.510789 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-29 04:06:56.521070 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-29 04:06:56.522538 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-29 04:06:56.523627 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-29 04:06:56.525043 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-29 04:06:56.564317 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-29 04:06:56.566214 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-29 04:06:56.567713 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-29 04:06:56.569282 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-29 04:06:56.573202 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-29 04:06:56.814420 | orchestrator | ++ which gilt 2026-03-29 04:06:56.815398 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-29 04:06:56.815462 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-29 04:06:57.002521 | orchestrator | osism.cfg-generics: 2026-03-29 04:06:57.073763 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-29 04:06:57.073852 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-29 04:06:57.073872 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-29 04:06:57.073905 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-29 04:06:57.841612 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-29 04:06:57.854563 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-29 04:06:58.211841 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-29 04:06:58.272072 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-29 04:06:58.272185 | orchestrator | + deactivate 2026-03-29 04:06:58.272234 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-29 04:06:58.272254 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 04:06:58.272269 | orchestrator | + export PATH 2026-03-29 04:06:58.272283 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-29 04:06:58.272299 | orchestrator | + '[' -n '' ']' 2026-03-29 04:06:58.272308 | orchestrator | + hash -r 2026-03-29 04:06:58.272316 | orchestrator | + '[' -n '' ']' 2026-03-29 04:06:58.272324 | orchestrator | + unset VIRTUAL_ENV 2026-03-29 04:06:58.272333 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-29 04:06:58.272354 | orchestrator | ~ 2026-03-29 04:06:58.272363 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-29 04:06:58.272371 | orchestrator | + unset -f deactivate 2026-03-29 04:06:58.272379 | orchestrator | + popd 2026-03-29 04:06:58.274454 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-03-29 04:06:58.339789 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 04:06:58.340454 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-29 04:06:58.442998 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-29 04:06:58.443235 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-29 04:06:58.449220 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-29 04:06:58.455811 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-03-29 04:06:58.518780 | orchestrator | ++ '[' -1 -le 0 ']' 2026-03-29 04:06:58.519939 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-03-29 04:06:58.622820 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-03-29 04:06:58.622968 | orchestrator | ++ echo true 2026-03-29 04:06:58.623347 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-03-29 04:06:58.625392 | orchestrator | +++ semver 2024.2 2024.2 2026-03-29 04:06:58.707105 | orchestrator | ++ '[' 0 -le 0 ']' 2026-03-29 04:06:58.707196 | orchestrator | +++ semver 2024.2 2025.1 2026-03-29 04:06:58.775003 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-03-29 04:06:58.775081 | orchestrator | ++ echo false 2026-03-29 04:06:58.776685 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-03-29 04:06:58.776723 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-29 04:06:58.776731 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-03-29 04:06:58.776738 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-03-29 04:06:58.776748 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-03-29 04:06:58.782639 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-03-29 04:06:58.782715 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-03-29 04:06:58.805729 | orchestrator | export RABBITMQ3TO4=true 2026-03-29 04:06:58.811573 | orchestrator | + osism update manager 2026-03-29 04:07:04.515527 | orchestrator | Collecting uv 2026-03-29 04:07:04.627394 | orchestrator | Downloading uv-0.11.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-03-29 04:07:04.651785 | orchestrator | Downloading uv-0.11.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.6 MB) 2026-03-29 04:07:05.457676 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 29.6 MB/s eta 0:00:00 2026-03-29 04:07:05.528685 | orchestrator | Installing collected packages: uv 2026-03-29 04:07:05.994257 | orchestrator | Successfully installed uv-0.11.2 2026-03-29 04:07:06.678001 | orchestrator | Resolved 11 packages in 408ms 2026-03-29 04:07:06.705830 | orchestrator | Downloading cryptography (4.3MiB) 2026-03-29 04:07:06.706508 | orchestrator | Downloading ansible-core (2.1MiB) 2026-03-29 04:07:06.706677 | orchestrator | Downloading ansible (54.5MiB) 2026-03-29 04:07:06.706829 | orchestrator | Downloading netaddr (2.2MiB) 2026-03-29 04:07:07.085188 | orchestrator | Downloaded netaddr 2026-03-29 04:07:07.231934 | orchestrator | Downloaded ansible-core 2026-03-29 04:07:07.235647 | orchestrator | Downloaded cryptography 2026-03-29 04:07:13.635630 | orchestrator | Downloaded ansible 2026-03-29 04:07:13.635833 | orchestrator | Prepared 11 packages in 6.95s 2026-03-29 04:07:14.148512 | orchestrator | Installed 11 packages in 511ms 2026-03-29 04:07:14.148610 | orchestrator | + ansible==11.11.0 2026-03-29 04:07:14.148621 | orchestrator | + ansible-core==2.18.15 2026-03-29 04:07:14.148628 | orchestrator | + cffi==2.0.0 2026-03-29 04:07:14.148636 | orchestrator | + cryptography==46.0.6 2026-03-29 04:07:14.148643 | orchestrator | + jinja2==3.1.6 2026-03-29 04:07:14.148649 | orchestrator | 
+ markupsafe==3.0.3 2026-03-29 04:07:14.148655 | orchestrator | + netaddr==1.3.0 2026-03-29 04:07:14.148662 | orchestrator | + packaging==26.0 2026-03-29 04:07:14.148667 | orchestrator | + pycparser==3.0 2026-03-29 04:07:14.148674 | orchestrator | + pyyaml==6.0.3 2026-03-29 04:07:14.148681 | orchestrator | + resolvelib==1.0.1 2026-03-29 04:07:15.302544 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-198641ga9reb0e/tmpib0hq9lk/ansible-collection-services1e1mc606'... 2026-03-29 04:07:16.747065 | orchestrator | Your branch is up to date with 'origin/main'. 2026-03-29 04:07:16.747140 | orchestrator | Already on 'main' 2026-03-29 04:07:17.205266 | orchestrator | Starting galaxy collection install process 2026-03-29 04:07:17.205360 | orchestrator | Process install dependency map 2026-03-29 04:07:17.205370 | orchestrator | Starting collection install process 2026-03-29 04:07:17.205378 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-03-29 04:07:17.205386 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-03-29 04:07:17.205393 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-29 04:07:17.733430 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-198700xq58tblz/tmpaz0ztkpw/ansible-playbooks-managerfrfofnlg'... 2026-03-29 04:07:18.326153 | orchestrator | Your branch is up to date with 'origin/main'. 
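The upgrade gating earlier in the trace (`MANAGER_UPGRADE_CROSSES_10`, `OPENSTACK_UPGRADE_CROSSES_2025`) is driven by a `semver` helper that prints `-1`, `0`, or `1` for less-than, equal, and greater-than. Its actual implementation is not shown in the log; the sketch below is an assumed stand-in built on `sort -V`. Note that `sort -V` does not implement full SemVer pre-release precedence (e.g. the `10.0.0-rc.1` vs `10.0.0-0` comparison in the log), so this is illustrative only:

```shell
set -e

# Assumed stand-in for the `semver` helper used by upgrade-manager.sh:
# prints -1 / 0 / 1. Strips a leading "v" (as in v0.20251130.0) and uses
# GNU `sort -V` for ordering; pre-release rules are NOT faithfully handled.
semver_cmp() {
  a=${1#v}
  b=${2#v}
  if [ "$a" = "$b" ]; then
    echo 0
    return
  fi
  lowest=$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)
  if [ "$lowest" = "$a" ]; then
    echo -1
  else
    echo 1
  fi
}

semver_cmp 9.5.0 10.0.0      # the "old version below the boundary" case
semver_cmp v2025.1 2024.2    # the "new version at/above the boundary" case
```

With a helper like this, the gating in the log reduces to two comparisons: the old version is at or below the boundary release, and the new version is at or above it.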
2026-03-29 04:07:18.326239 | orchestrator | Already on 'main' 2026-03-29 04:07:18.594610 | orchestrator | Starting galaxy collection install process 2026-03-29 04:07:18.594683 | orchestrator | Process install dependency map 2026-03-29 04:07:18.594690 | orchestrator | Starting collection install process 2026-03-29 04:07:18.594695 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-03-29 04:07:18.594701 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-03-29 04:07:18.594705 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-03-29 04:07:19.253198 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-29 04:07:19.253297 | orchestrator | -vvvv to see details 2026-03-29 04:07:19.673416 | orchestrator | 2026-03-29 04:07:19.673533 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-03-29 04:07:19.673549 | orchestrator | 2026-03-29 04:07:19.673558 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 04:07:23.608892 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:23.609123 | orchestrator | 2026-03-29 04:07:23.609148 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-29 04:07:23.689602 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 04:07:23.689689 | orchestrator | 2026-03-29 04:07:23.689721 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-29 04:07:25.530848 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:25.530975 | orchestrator | 2026-03-29 04:07:25.530991 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-03-29 04:07:25.593681 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:25.593763 | orchestrator | 2026-03-29 04:07:25.593772 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-29 04:07:25.663434 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-29 04:07:25.663549 | orchestrator | 2026-03-29 04:07:25.663572 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-29 04:07:30.043010 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-03-29 04:07:30.043116 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-03-29 04:07:30.043135 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-29 04:07:30.043155 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-03-29 04:07:30.043162 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-29 04:07:30.043169 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-29 04:07:30.043176 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-29 04:07:30.043183 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-03-29 04:07:30.043190 | orchestrator | 2026-03-29 04:07:30.043198 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-29 04:07:31.166930 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:31.167075 | orchestrator | 2026-03-29 04:07:31.167097 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-29 04:07:32.089668 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:32.089754 | orchestrator | 2026-03-29 04:07:32.089765 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-03-29 04:07:32.203469 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-29 04:07:32.203578 | orchestrator | 2026-03-29 04:07:32.203594 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-29 04:07:34.084017 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-03-29 04:07:34.084103 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-03-29 04:07:34.084113 | orchestrator | 2026-03-29 04:07:34.084121 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-29 04:07:35.051637 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:35.051738 | orchestrator | 2026-03-29 04:07:35.051749 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-29 04:07:35.125578 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:07:35.125673 | orchestrator | 2026-03-29 04:07:35.125686 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-29 04:07:35.209433 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-29 04:07:35.209520 | orchestrator | 2026-03-29 04:07:35.209530 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-29 04:07:36.130747 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:36.130838 | orchestrator | 2026-03-29 04:07:36.130850 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-29 04:07:36.208382 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-29 04:07:36.208457 | 
orchestrator | 2026-03-29 04:07:36.208466 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-29 04:07:38.254678 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-29 04:07:38.254762 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-29 04:07:38.254772 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:38.254782 | orchestrator | 2026-03-29 04:07:38.254789 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-29 04:07:39.259295 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:39.259415 | orchestrator | 2026-03-29 04:07:39.259428 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-29 04:07:39.323755 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:07:39.323875 | orchestrator | 2026-03-29 04:07:39.323892 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-29 04:07:39.431304 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-29 04:07:39.431401 | orchestrator | 2026-03-29 04:07:39.431414 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-29 04:07:40.140735 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:40.140828 | orchestrator | 2026-03-29 04:07:40.140840 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-29 04:07:40.641718 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:40.641792 | orchestrator | 2026-03-29 04:07:40.641800 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-29 04:07:42.504658 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-03-29 04:07:42.504767 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-03-29 04:07:42.504782 | orchestrator | 2026-03-29 04:07:42.504794 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-29 04:07:43.654205 | orchestrator | changed: [testbed-manager] 2026-03-29 04:07:43.654309 | orchestrator | 2026-03-29 04:07:43.654325 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-29 04:07:44.215677 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:44.215767 | orchestrator | 2026-03-29 04:07:44.215778 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-29 04:07:44.752859 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:44.752932 | orchestrator | 2026-03-29 04:07:44.752953 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-29 04:07:44.809610 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:07:44.809702 | orchestrator | 2026-03-29 04:07:44.809719 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-29 04:07:44.880173 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-29 04:07:44.880297 | orchestrator | 2026-03-29 04:07:44.880318 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-29 04:07:44.933624 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:44.933746 | orchestrator | 2026-03-29 04:07:44.933762 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-29 04:07:47.681024 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-03-29 04:07:47.681102 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-03-29 04:07:47.681110 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-03-29 04:07:47.681115 | orchestrator | 2026-03-29 04:07:47.681121 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-29 04:07:48.703470 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:48.703560 | orchestrator | 2026-03-29 04:07:48.703572 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-29 04:07:49.718528 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:49.718632 | orchestrator | 2026-03-29 04:07:49.718645 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-29 04:07:50.702955 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:50.703071 | orchestrator | 2026-03-29 04:07:50.703078 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-29 04:07:50.779813 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-29 04:07:50.779897 | orchestrator | 2026-03-29 04:07:50.779908 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-29 04:07:50.841400 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:50.841502 | orchestrator | 2026-03-29 04:07:50.841518 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-29 04:07:51.795836 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-03-29 04:07:51.795963 | orchestrator | 2026-03-29 04:07:51.796045 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-29 04:07:51.910776 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-29 04:07:51.910864 | orchestrator | 2026-03-29 04:07:51.910875 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-29 04:07:52.934146 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:52.934251 | orchestrator | 2026-03-29 04:07:52.934263 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-29 04:07:54.023722 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:54.023811 | orchestrator | 2026-03-29 04:07:54.023823 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-29 04:07:54.108159 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:07:54.108235 | orchestrator | 2026-03-29 04:07:54.108250 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-29 04:07:54.169291 | orchestrator | ok: [testbed-manager] 2026-03-29 04:07:54.169365 | orchestrator | 2026-03-29 04:07:54.169374 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-29 04:07:55.500233 | orchestrator | changed: [testbed-manager] 2026-03-29 04:07:55.500322 | orchestrator | 2026-03-29 04:07:55.500334 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-29 04:09:02.232952 | orchestrator | changed: [testbed-manager] 2026-03-29 04:09:02.233101 | orchestrator | 2026-03-29 04:09:02.233116 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-29 04:09:03.537931 | orchestrator | ok: [testbed-manager] 2026-03-29 04:09:03.538006 | orchestrator | 2026-03-29 04:09:03.538013 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-29 04:09:03.607843 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:09:03.607920 | orchestrator | 2026-03-29 04:09:03.607928 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-29 
04:09:04.473573 | orchestrator | ok: [testbed-manager] 2026-03-29 04:09:04.473690 | orchestrator | 2026-03-29 04:09:04.473708 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-29 04:09:04.548777 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:09:04.548884 | orchestrator | 2026-03-29 04:09:04.548897 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-29 04:09:04.548910 | orchestrator | 2026-03-29 04:09:04.548926 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-29 04:09:23.243178 | orchestrator | changed: [testbed-manager] 2026-03-29 04:09:23.243260 | orchestrator | 2026-03-29 04:09:23.243269 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-29 04:10:23.314295 | orchestrator | Pausing for 60 seconds 2026-03-29 04:10:23.314404 | orchestrator | changed: [testbed-manager] 2026-03-29 04:10:23.314411 | orchestrator | 2026-03-29 04:10:23.314416 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-03-29 04:10:23.372585 | orchestrator | ok: [testbed-manager] 2026-03-29 04:10:23.372695 | orchestrator | 2026-03-29 04:10:23.372711 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-29 04:10:27.381677 | orchestrator | changed: [testbed-manager] 2026-03-29 04:10:27.381771 | orchestrator | 2026-03-29 04:10:27.381784 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-29 04:11:30.108884 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-29 04:11:30.108976 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-03-29 04:11:30.108984 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-29 04:11:30.108992 | orchestrator | changed: [testbed-manager] 2026-03-29 04:11:30.109000 | orchestrator | 2026-03-29 04:11:30.109007 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-29 04:11:41.333601 | orchestrator | changed: [testbed-manager] 2026-03-29 04:11:41.333681 | orchestrator | 2026-03-29 04:11:41.333690 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-29 04:11:41.417277 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-29 04:11:41.417369 | orchestrator | 2026-03-29 04:11:41.417376 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-29 04:11:41.417381 | orchestrator | 2026-03-29 04:11:41.417385 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-29 04:11:41.476581 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:11:41.476650 | orchestrator | 2026-03-29 04:11:41.476656 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-29 04:11:41.560107 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-29 04:11:41.560178 | orchestrator | 2026-03-29 04:11:41.560201 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-29 04:11:42.684797 | orchestrator | changed: [testbed-manager] 2026-03-29 04:11:42.684900 | orchestrator | 2026-03-29 04:11:42.684918 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-29 04:11:46.241677 
| orchestrator | ok: [testbed-manager] 2026-03-29 04:11:46.241805 | orchestrator | 2026-03-29 04:11:46.241834 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-29 04:11:46.327361 | orchestrator | ok: [testbed-manager] => { 2026-03-29 04:11:46.327455 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-29 04:11:46.327469 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-29 04:11:46.327480 | orchestrator | "Checking running containers against expected versions...", 2026-03-29 04:11:46.327490 | orchestrator | "", 2026-03-29 04:11:46.327501 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-29 04:11:46.327510 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-29 04:11:46.327521 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.327530 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-29 04:11:46.327540 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.327549 | orchestrator | "", 2026-03-29 04:11:46.327559 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-29 04:11:46.327569 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-29 04:11:46.327578 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.327587 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-29 04:11:46.327597 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.327606 | orchestrator | "", 2026-03-29 04:11:46.327616 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-29 04:11:46.327625 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-29 04:11:46.327635 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.327644 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-29 04:11:46.327654 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.327663 | orchestrator | "", 2026-03-29 04:11:46.327672 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-29 04:11:46.327682 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-29 04:11:46.327691 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.327700 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-29 04:11:46.327710 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.327719 | orchestrator | "", 2026-03-29 04:11:46.327729 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-29 04:11:46.327739 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-29 04:11:46.327748 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.327757 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-29 04:11:46.327767 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.327776 | orchestrator | "", 2026-03-29 04:11:46.327785 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-29 04:11:46.327816 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.327826 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.327836 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.327845 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.327857 | orchestrator | "", 2026-03-29 04:11:46.327868 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-29 04:11:46.327879 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-29 04:11:46.327889 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.327900 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-29 
04:11:46.327911 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.327922 | orchestrator | "", 2026-03-29 04:11:46.327933 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-29 04:11:46.327944 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-29 04:11:46.327956 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.327975 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-29 04:11:46.327992 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.328009 | orchestrator | "", 2026-03-29 04:11:46.328026 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-29 04:11:46.328043 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-29 04:11:46.328060 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.328076 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-29 04:11:46.328094 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.328110 | orchestrator | "", 2026-03-29 04:11:46.328132 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-29 04:11:46.328150 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-29 04:11:46.328168 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.328187 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-29 04:11:46.328205 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.328221 | orchestrator | "", 2026-03-29 04:11:46.328232 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-29 04:11:46.328243 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328341 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.328356 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328365 | orchestrator | " Status: ✅ MATCH", 2026-03-29 
04:11:46.328375 | orchestrator | "", 2026-03-29 04:11:46.328384 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-29 04:11:46.328394 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328404 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.328413 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328422 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.328432 | orchestrator | "", 2026-03-29 04:11:46.328441 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-29 04:11:46.328451 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328461 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.328470 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328479 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.328489 | orchestrator | "", 2026-03-29 04:11:46.328498 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-29 04:11:46.328508 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328518 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.328527 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328556 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.328567 | orchestrator | "", 2026-03-29 04:11:46.328577 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-29 04:11:46.328586 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328606 | orchestrator | " Enabled: true", 2026-03-29 04:11:46.328616 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-29 04:11:46.328626 | orchestrator | " Status: ✅ MATCH", 2026-03-29 04:11:46.328635 | orchestrator | "", 2026-03-29 04:11:46.328645 | orchestrator | "=== Summary 
===", 2026-03-29 04:11:46.328655 | orchestrator | "Errors (version mismatches): 0", 2026-03-29 04:11:46.328664 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-29 04:11:46.328674 | orchestrator | "", 2026-03-29 04:11:46.328684 | orchestrator | "✅ All running containers match expected versions!" 2026-03-29 04:11:46.328694 | orchestrator | ] 2026-03-29 04:11:46.328704 | orchestrator | } 2026-03-29 04:11:46.328714 | orchestrator | 2026-03-29 04:11:46.328724 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-29 04:11:46.393409 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:11:46.393512 | orchestrator | 2026-03-29 04:11:46.393528 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:11:46.393541 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-03-29 04:11:46.393552 | orchestrator | 2026-03-29 04:11:59.026813 | orchestrator | 2026-03-29 04:11:59 | INFO  | Task 8eb174ee-987d-4996-9f61-112064153c69 (sync inventory) is running in background. Output coming soon. 
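The per-service comparison behind the version check output above can be sketched as follows. This is a minimal, hypothetical reconstruction: the real script's internals are not shown in the log, and the `running_image` stub stands in for `docker inspect -f '{{.Config.Image}}' "$name"` so the sketch runs without a Docker daemon.

```shell
set -eu

# Stub standing in for: docker inspect -f '{{.Config.Image}}' "$name"
# (hypothetical resolver; the real check inspects the live container)
running_image() {
    case "$1" in
        osismclient) echo "registry.osism.tech/osism/osism:0.20251208.0" ;;
        *)           echo "unknown" ;;
    esac
}

# Compare the image a service is actually running against the pinned
# reference, mirroring the MATCH/MISMATCH lines in the check output.
check_service() {
    local name=$1 expected=$2 running
    running=$(running_image "$name")
    if [ "$running" = "$expected" ]; then
        echo "$name: MATCH"
    else
        echo "$name: MISMATCH (expected $expected, running $running)"
        return 1
    fi
}

check_service osismclient registry.osism.tech/osism/osism:0.20251208.0
```

With one such call per service entry, the summary counters ("Errors (version mismatches)") fall out of the non-zero return codes.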
2026-03-29 04:12:28.577069 | orchestrator | 2026-03-29 04:12:00 | INFO  | Starting group_vars file reorganization 2026-03-29 04:12:28.577155 | orchestrator | 2026-03-29 04:12:00 | INFO  | Moved 0 file(s) to their respective directories 2026-03-29 04:12:28.577175 | orchestrator | 2026-03-29 04:12:00 | INFO  | Group_vars file reorganization completed 2026-03-29 04:12:28.577180 | orchestrator | 2026-03-29 04:12:03 | INFO  | Starting variable preparation from inventory 2026-03-29 04:12:28.577185 | orchestrator | 2026-03-29 04:12:06 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-29 04:12:28.577196 | orchestrator | 2026-03-29 04:12:06 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-29 04:12:28.577201 | orchestrator | 2026-03-29 04:12:06 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-29 04:12:28.577205 | orchestrator | 2026-03-29 04:12:06 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-29 04:12:28.577209 | orchestrator | 2026-03-29 04:12:06 | INFO  | Variable preparation completed 2026-03-29 04:12:28.577213 | orchestrator | 2026-03-29 04:12:08 | INFO  | Starting inventory overwrite handling 2026-03-29 04:12:28.577216 | orchestrator | 2026-03-29 04:12:08 | INFO  | Handling group overwrites in 99-overwrite 2026-03-29 04:12:28.577220 | orchestrator | 2026-03-29 04:12:08 | INFO  | Removing group frr:children from 60-generic 2026-03-29 04:12:28.577224 | orchestrator | 2026-03-29 04:12:08 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-29 04:12:28.577228 | orchestrator | 2026-03-29 04:12:08 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-29 04:12:28.577232 | orchestrator | 2026-03-29 04:12:08 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-29 04:12:28.577236 | orchestrator | 2026-03-29 04:12:08 | INFO  | Handling group overwrites in 20-roles 2026-03-29 04:12:28.577240 | orchestrator | 2026-03-29 04:12:08 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-29 04:12:28.577243 | orchestrator | 2026-03-29 04:12:08 | INFO  | Removed 5 group(s) in total 2026-03-29 04:12:28.577247 | orchestrator | 2026-03-29 04:12:08 | INFO  | Inventory overwrite handling completed 2026-03-29 04:12:28.577251 | orchestrator | 2026-03-29 04:12:09 | INFO  | Starting merge of inventory files 2026-03-29 04:12:28.577255 | orchestrator | 2026-03-29 04:12:09 | INFO  | Inventory files merged successfully 2026-03-29 04:12:28.577274 | orchestrator | 2026-03-29 04:12:15 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-29 04:12:28.577278 | orchestrator | 2026-03-29 04:12:27 | INFO  | Successfully wrote ClusterShell configuration 2026-03-29 04:12:28.900878 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 04:12:28.900975 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-29 04:12:28.900991 | orchestrator | + local max_attempts=60 2026-03-29 04:12:28.901005 | orchestrator | + local name=kolla-ansible 2026-03-29 04:12:28.901017 | orchestrator | + local attempt_num=1 2026-03-29 04:12:28.901508 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-29 04:12:28.937261 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 04:12:28.937367 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-29 04:12:28.937375 | orchestrator | + local max_attempts=60 2026-03-29 04:12:28.937382 | orchestrator | + local name=osism-ansible 2026-03-29 04:12:28.937387 | orchestrator | + local attempt_num=1 2026-03-29 04:12:28.937501 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-29 04:12:28.972791 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 04:12:28.972870 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-29 04:12:29.182236 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-29 04:12:29.182388 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-29 04:12:29.182404 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-29 04:12:29.182428 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-29 04:12:29.182438 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-03-29 04:12:29.182446 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-03-29 04:12:29.182453 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-03-29 04:12:29.182460 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-03-29 04:12:29.182468 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 16 seconds ago 2026-03-29 04:12:29.182832 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-03-29 04:12:29.182849 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-03-29 04:12:29.182857 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-03-29 04:12:29.182864 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-29 04:12:29.182890 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-03-29 04:12:29.182899 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-03-29 04:12:29.182911 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-03-29 04:12:29.189086 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-03-29 04:12:29.189155 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-03-29 04:12:29.189163 | orchestrator | + osism apply facts 2026-03-29 04:12:41.359558 | orchestrator | 2026-03-29 04:12:41 | INFO  | Task ae8b0bde-e580-40c2-a8a8-2479a82beb3b (facts) was prepared for execution. 2026-03-29 04:12:41.359633 | orchestrator | 2026-03-29 04:12:41 | INFO  | It takes a moment until task ae8b0bde-e580-40c2-a8a8-2479a82beb3b (facts) has been started and output is visible here. 
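The `wait_for_container_healthy` helper traced above (max attempts, container name, attempt counter, `docker inspect` on `.State.Health.Status`) can be sketched as a simple polling loop. The health probe is stubbed with a file-backed counter so the sketch runs without Docker; the real helper shells out to `docker inspect -f '{{.State.Health.Status}}' "$name"` and, presumably, sleeps longer between probes.

```shell
set -u

tries_file=$(mktemp)
echo 0 > "$tries_file"

# Stub for: docker inspect -f '{{.State.Health.Status}}' "$name"
# Reports "starting" twice, then "healthy". File-backed so the counter
# survives the command-substitution subshell.
health_status() {
    local n
    n=$(( $(cat "$tries_file") + 1 ))
    echo "$n" > "$tries_file"
    if [ "$n" -ge 3 ]; then echo healthy; else echo starting; fi
}

wait_for_container_healthy() {
    local max_attempts=$1 name=$2 attempt_num=1
    until [ "$(health_status "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 0.1   # assumption: the real helper waits longer per probe
    done
    echo "$name is healthy after $attempt_num probe(s)"
}

wait_for_container_healthy 60 kolla-ansible
```

The same pattern explains the earlier "Wait for an healthy manager service" handler: repeated probes with a retry budget, succeeding as soon as the status flips to `healthy`.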
2026-03-29 04:13:00.713695 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-29 04:13:00.713799 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-29 04:13:00.713820 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-29 04:13:00.713827 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-29 04:13:00.713840 | orchestrator | 2026-03-29 04:13:00.713847 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-29 04:13:00.713854 | orchestrator | 2026-03-29 04:13:00.713860 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-29 04:13:00.713866 | orchestrator | Sunday 29 March 2026 04:12:47 +0000 (0:00:01.780) 0:00:01.780 ********** 2026-03-29 04:13:00.713873 | orchestrator | ok: [testbed-manager] 2026-03-29 04:13:00.713880 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:13:00.713886 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:13:00.713892 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:13:00.713899 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:13:00.713905 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:13:00.713911 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:13:00.713917 | orchestrator | 2026-03-29 04:13:00.713942 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-29 04:13:00.713949 | orchestrator | Sunday 29 March 2026 04:12:49 +0000 (0:00:02.216) 0:00:03.997 ********** 2026-03-29 04:13:00.713956 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:13:00.713961 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:13:00.713965 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:13:00.713969 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:13:00.713973 | orchestrator | skipping: [testbed-node-3] 2026-03-29 
04:13:00.713977 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:13:00.713980 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:13:00.713984 | orchestrator | 2026-03-29 04:13:00.713988 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 04:13:00.713992 | orchestrator | 2026-03-29 04:13:00.713996 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-29 04:13:00.714000 | orchestrator | Sunday 29 March 2026 04:12:51 +0000 (0:00:01.833) 0:00:05.830 ********** 2026-03-29 04:13:00.714003 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:13:00.714007 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:13:00.714011 | orchestrator | ok: [testbed-manager] 2026-03-29 04:13:00.714058 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:13:00.714079 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:13:00.714083 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:13:00.714086 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:13:00.714090 | orchestrator | 2026-03-29 04:13:00.714094 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 04:13:00.714098 | orchestrator | 2026-03-29 04:13:00.714102 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 04:13:00.714106 | orchestrator | Sunday 29 March 2026 04:12:58 +0000 (0:00:06.807) 0:00:12.638 ********** 2026-03-29 04:13:00.714110 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:13:00.714113 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:13:00.714117 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:13:00.714121 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:13:00.714125 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:13:00.714128 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:13:00.714132 | orchestrator | skipping: [testbed-node-5] 
2026-03-29 04:13:00.714136 | orchestrator | 2026-03-29 04:13:00.714139 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:13:00.714143 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:13:00.714149 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:13:00.714152 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:13:00.714156 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:13:00.714160 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:13:00.714164 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:13:00.714167 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 04:13:00.714171 | orchestrator | 2026-03-29 04:13:00.714175 | orchestrator | 2026-03-29 04:13:00.714179 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:13:00.714183 | orchestrator | Sunday 29 March 2026 04:13:00 +0000 (0:00:01.770) 0:00:14.408 ********** 2026-03-29 04:13:00.714187 | orchestrator | =============================================================================== 2026-03-29 04:13:00.714190 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.81s 2026-03-29 04:13:00.714194 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.22s 2026-03-29 04:13:00.714198 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.83s 2026-03-29 04:13:00.714202 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 1.77s 2026-03-29 04:13:01.022468 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-29 04:13:01.122834 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-29 04:13:01.123700 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-29 04:13:01.162720 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-03-29 04:13:01.162815 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-03-29 04:13:01.170867 | orchestrator | + set -e 2026-03-29 04:13:01.170956 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-03-29 04:13:01.170970 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-29 04:13:01.182602 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-03-29 04:13:01.193005 | orchestrator | 2026-03-29 04:13:01.193086 | orchestrator | # UPGRADE SERVICES 2026-03-29 04:13:01.193119 | orchestrator | 2026-03-29 04:13:01.193127 | orchestrator | + set -e 2026-03-29 04:13:01.193133 | orchestrator | + echo 2026-03-29 04:13:01.193139 | orchestrator | + echo '# UPGRADE SERVICES' 2026-03-29 04:13:01.193145 | orchestrator | + echo 2026-03-29 04:13:01.193151 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 04:13:01.194344 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 04:13:01.194399 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 04:13:01.194406 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 04:13:01.194410 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 04:13:01.194415 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 04:13:01.194420 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 04:13:01.194424 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 04:13:01.194428 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 04:13:01.194433 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-03-29 04:13:01.194437 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 04:13:01.194441 | orchestrator | ++ export ARA=false 2026-03-29 04:13:01.194445 | orchestrator | ++ ARA=false 2026-03-29 04:13:01.194449 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 04:13:01.194453 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 04:13:01.194456 | orchestrator | ++ export TEMPEST=false 2026-03-29 04:13:01.194460 | orchestrator | ++ TEMPEST=false 2026-03-29 04:13:01.194464 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 04:13:01.194468 | orchestrator | ++ IS_ZUUL=true 2026-03-29 04:13:01.194472 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 04:13:01.194476 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 04:13:01.194480 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 04:13:01.194484 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 04:13:01.194500 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 04:13:01.194505 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 04:13:01.194508 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 04:13:01.194512 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 04:13:01.194516 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 04:13:01.194523 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 04:13:01.194527 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-29 04:13:01.194530 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-29 04:13:01.194722 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-03-29 04:13:01.194734 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-03-29 04:13:01.194740 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-29 04:13:01.203261 | orchestrator | + set -e 2026-03-29 04:13:01.203357 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 04:13:01.204088 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 04:13:01.204131 | 
orchestrator | ++ INTERACTIVE=false 2026-03-29 04:13:01.204138 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 04:13:01.204144 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 04:13:01.204284 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 04:13:01.204296 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 04:13:01.204301 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 04:13:01.204306 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 04:13:01.204336 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 04:13:01.204343 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 04:13:01.204350 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 04:13:01.204356 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 04:13:01.204363 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 04:13:01.204370 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 04:13:01.204376 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 04:13:01.204382 | orchestrator | ++ export ARA=false 2026-03-29 04:13:01.204395 | orchestrator | ++ ARA=false 2026-03-29 04:13:01.204399 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 04:13:01.204403 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 04:13:01.204444 | orchestrator | ++ export TEMPEST=false 2026-03-29 04:13:01.204450 | orchestrator | ++ TEMPEST=false 2026-03-29 04:13:01.204464 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 04:13:01.204471 | orchestrator | ++ IS_ZUUL=true 2026-03-29 04:13:01.204479 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 04:13:01.204485 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.84 2026-03-29 04:13:01.204492 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 04:13:01.204504 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 04:13:01.204511 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 04:13:01.204517 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 04:13:01.204524 | orchestrator | ++ 
export IMAGE_NODE_USER=ubuntu 2026-03-29 04:13:01.204564 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 04:13:01.204570 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 04:13:01.204574 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 04:13:01.204600 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-29 04:13:01.204607 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-29 04:13:01.204837 | orchestrator | 2026-03-29 04:13:01.204846 | orchestrator | # PULL IMAGES 2026-03-29 04:13:01.204850 | orchestrator | 2026-03-29 04:13:01.204854 | orchestrator | + echo 2026-03-29 04:13:01.204859 | orchestrator | + echo '# PULL IMAGES' 2026-03-29 04:13:01.204865 | orchestrator | + echo 2026-03-29 04:13:01.206372 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-29 04:13:01.273852 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-29 04:13:01.273936 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-29 04:13:03.242170 | orchestrator | 2026-03-29 04:13:03 | INFO  | Trying to run play pull-images in environment custom 2026-03-29 04:13:13.382543 | orchestrator | 2026-03-29 04:13:13 | INFO  | Task 1ff23563-5b34-4e39-ab25-febcdc63d6b1 (pull-images) was prepared for execution. 2026-03-29 04:13:13.385427 | orchestrator | 2026-03-29 04:13:13 | INFO  | Task 1ff23563-5b34-4e39-ab25-febcdc63d6b1 is running in background. No more output. Check ARA for logs. 
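The trace above gates the pull step on a version comparison: `semver 9.5.0 7.0.0` prints a value that the script then tests with `[[ 1 -ge 0 ]]`. The `semver` helper itself comes from `include.sh` and is not shown in the log, so the following is only a sketch of how such a three-way comparison could be built with `sort -V`; the function name and output convention are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the trace: print -1, 0 or 1
# depending on how two dotted versions compare. The real `semver`
# helper in /opt/configuration/scripts/include.sh may differ.
semver_cmp() {
    local a=$1 b=$2
    if [[ "$a" == "$b" ]]; then
        echo 0
        return
    fi
    # sort -V orders versions naturally; the smaller one sorts first
    if [[ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" == "$a" ]]; then
        echo -1
    else
        echo 1
    fi
}

result=$(semver_cmp 9.5.0 7.0.0)
if [[ "$result" -ge 0 ]]; then
    echo "manager version is >= 7.0.0, pulling images"
fi
```

With the inputs from the log (`9.5.0` vs `7.0.0`) the comparison yields `1`, matching the `[[ 1 -ge 0 ]]` test in the trace.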
2026-03-29 04:13:13.725416 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-03-29 04:13:13.731687 | orchestrator | + set -e
2026-03-29 04:13:13.731765 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-29 04:13:13.731776 | orchestrator | ++ export INTERACTIVE=false
2026-03-29 04:13:13.731783 | orchestrator | ++ INTERACTIVE=false
2026-03-29 04:13:13.731790 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-29 04:13:13.731796 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-29 04:13:13.731803 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-29 04:13:13.733217 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-29 04:13:13.741185 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-03-29 04:13:13.741255 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-03-29 04:13:13.741265 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-03-29 04:13:13.784273 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-29 04:13:13.784436 | orchestrator | + osism apply frr
2026-03-29 04:13:26.129129 | orchestrator | 2026-03-29 04:13:26 | INFO  | Task 3b2e2919-a9d1-4c94-b910-637e8ba79d09 (frr) was prepared for execution.
2026-03-29 04:13:26.129229 | orchestrator | 2026-03-29 04:13:26 | INFO  | It takes a moment until task 3b2e2919-a9d1-4c94-b910-637e8ba79d09 (frr) has been started and output is visible here.
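The `500-kubernetes.sh` trace above derives `MANAGER_VERSION` by running `awk` with a `": "` field separator over the configuration repository. A self-contained reproduction of that extraction against a stand-in file (the file contents below are invented for illustration; only the `awk` invocation matches the log):

```shell
#!/usr/bin/env bash
# Reproduce the MANAGER_VERSION extraction from manager-version.sh
# against a temporary stand-in for configuration.yml.
cat > /tmp/configuration.yml <<'EOF'
manager_version: 10.0.0-rc.1
openstack_version: 2024.2
EOF

# '-F: ' sets the field separator to ": ", so $2 is everything after
# the key on the matching line.
MANAGER_VERSION=$(awk '-F: ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
export MANAGER_VERSION
echo "$MANAGER_VERSION"   # 10.0.0-rc.1
```

This is a plain-text extraction, not a YAML parse, so it only works because the key sits at the start of a line exactly as `manager_version: <value>`.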
2026-03-29 04:13:58.259785 | orchestrator |
2026-03-29 04:13:58.259895 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-29 04:13:58.259907 | orchestrator |
2026-03-29 04:13:58.259912 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-29 04:13:58.259917 | orchestrator | Sunday 29 March 2026 04:13:33 +0000 (0:00:02.960) 0:00:02.960 **********
2026-03-29 04:13:58.259922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-29 04:13:58.259928 | orchestrator |
2026-03-29 04:13:58.259932 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-29 04:13:58.259936 | orchestrator | Sunday 29 March 2026 04:13:36 +0000 (0:00:02.361) 0:00:05.322 **********
2026-03-29 04:13:58.259940 | orchestrator | ok: [testbed-manager]
2026-03-29 04:13:58.259945 | orchestrator |
2026-03-29 04:13:58.259949 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-29 04:13:58.259954 | orchestrator | Sunday 29 March 2026 04:13:38 +0000 (0:00:01.997) 0:00:07.319 **********
2026-03-29 04:13:58.259959 | orchestrator | ok: [testbed-manager]
2026-03-29 04:13:58.259965 | orchestrator |
2026-03-29 04:13:58.259971 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-29 04:13:58.259977 | orchestrator | Sunday 29 March 2026 04:13:41 +0000 (0:00:02.999) 0:00:10.319 **********
2026-03-29 04:13:58.259983 | orchestrator | ok: [testbed-manager]
2026-03-29 04:13:58.259989 | orchestrator |
2026-03-29 04:13:58.259995 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-29 04:13:58.260001 | orchestrator | Sunday 29 March 2026 04:13:43 +0000 (0:00:01.912) 0:00:12.232 **********
2026-03-29 04:13:58.260006 | orchestrator | ok: [testbed-manager]
2026-03-29 04:13:58.260032 | orchestrator |
2026-03-29 04:13:58.260038 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-29 04:13:58.260044 | orchestrator | Sunday 29 March 2026 04:13:44 +0000 (0:00:01.885) 0:00:14.117 **********
2026-03-29 04:13:58.260050 | orchestrator | ok: [testbed-manager]
2026-03-29 04:13:58.260055 | orchestrator |
2026-03-29 04:13:58.260061 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-29 04:13:58.260068 | orchestrator | Sunday 29 March 2026 04:13:47 +0000 (0:00:02.367) 0:00:16.485 **********
2026-03-29 04:13:58.260074 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:13:58.260081 | orchestrator |
2026-03-29 04:13:58.260088 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-29 04:13:58.260095 | orchestrator | Sunday 29 March 2026 04:13:48 +0000 (0:00:01.133) 0:00:17.618 **********
2026-03-29 04:13:58.260100 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:13:58.260107 | orchestrator |
2026-03-29 04:13:58.260113 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-29 04:13:58.260120 | orchestrator | Sunday 29 March 2026 04:13:49 +0000 (0:00:01.203) 0:00:18.822 **********
2026-03-29 04:13:58.260126 | orchestrator | ok: [testbed-manager]
2026-03-29 04:13:58.260133 | orchestrator |
2026-03-29 04:13:58.260154 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-29 04:13:58.260159 | orchestrator | Sunday 29 March 2026 04:13:51 +0000 (0:00:01.928) 0:00:20.751 **********
2026-03-29 04:13:58.260163 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-29 04:13:58.260168 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-29 04:13:58.260173 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-29 04:13:58.260178 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-29 04:13:58.260182 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-29 04:13:58.260186 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-29 04:13:58.260190 | orchestrator |
2026-03-29 04:13:58.260194 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-29 04:13:58.260198 | orchestrator | Sunday 29 March 2026 04:13:55 +0000 (0:00:03.680) 0:00:24.431 **********
2026-03-29 04:13:58.260202 | orchestrator | ok: [testbed-manager]
2026-03-29 04:13:58.260206 | orchestrator |
2026-03-29 04:13:58.260210 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 04:13:58.260214 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 04:13:58.260218 | orchestrator |
2026-03-29 04:13:58.260222 | orchestrator |
2026-03-29 04:13:58.260226 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 04:13:58.260229 | orchestrator | Sunday 29 March 2026 04:13:57 +0000 (0:00:02.645) 0:00:27.077 **********
2026-03-29 04:13:58.260233 | orchestrator | ===============================================================================
2026-03-29 04:13:58.260237 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.68s
2026-03-29 04:13:58.260241 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.00s
2026-03-29 04:13:58.260245 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.65s
2026-03-29 04:13:58.260249 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.37s
2026-03-29 04:13:58.260253 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.36s
2026-03-29 04:13:58.260256 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.00s
2026-03-29 04:13:58.260260 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.93s
2026-03-29 04:13:58.260269 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.91s
2026-03-29 04:13:58.260289 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.89s
2026-03-29 04:13:58.260293 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.20s
2026-03-29 04:13:58.260297 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.13s
2026-03-29 04:13:58.566271 | orchestrator | + osism apply kubernetes
2026-03-29 04:14:00.853804 | orchestrator | 2026-03-29 04:14:00 | INFO  | Task 71f8ea6f-00be-4c7b-8211-0c956597cbb3 (kubernetes) was prepared for execution.
2026-03-29 04:14:00.853876 | orchestrator | 2026-03-29 04:14:00 | INFO  | It takes a moment until task 71f8ea6f-00be-4c7b-8211-0c956597cbb3 (kubernetes) has been started and output is visible here.
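The "Set sysctl parameters" task in the frr play loops over six kernel parameters (the items are visible in the task output above). A plain-shell sketch of the same list, assuming one would apply each entry with `sysctl -w`; here the commands are only echoed, since actually applying them needs root:

```shell
#!/usr/bin/env bash
# The six sysctl items from the frr role's task output, as name=value
# pairs. This only prints what would be run; it does not touch the host.
params=(
  "net.ipv4.ip_forward=1"
  "net.ipv4.conf.all.send_redirects=0"
  "net.ipv4.conf.all.accept_redirects=0"
  "net.ipv4.fib_multipath_hash_policy=1"
  "net.ipv4.conf.default.ignore_routes_with_linkdown=1"
  "net.ipv4.conf.all.rp_filter=2"
)

for p in "${params[@]}"; do
  echo "would run: sysctl -w $p"
done
```

The mix is what a BGP-routed host typically needs: forwarding on, ICMP redirects off, multipath hashing on, and loose reverse-path filtering.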
2026-03-29 04:14:45.890085 | orchestrator | 2026-03-29 04:14:45.890226 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-29 04:14:45.890253 | orchestrator | 2026-03-29 04:14:45.890272 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-29 04:14:45.890293 | orchestrator | Sunday 29 March 2026 04:14:08 +0000 (0:00:02.919) 0:00:02.919 ********** 2026-03-29 04:14:45.890313 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:14:45.890332 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:14:45.890352 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:14:45.890371 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:14:45.890506 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:14:45.890534 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:14:45.890552 | orchestrator | 2026-03-29 04:14:45.890565 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-29 04:14:45.890578 | orchestrator | Sunday 29 March 2026 04:14:12 +0000 (0:00:04.072) 0:00:06.991 ********** 2026-03-29 04:14:45.890591 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.890604 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.890617 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.890629 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.890642 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.890655 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.890667 | orchestrator | 2026-03-29 04:14:45.890680 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-29 04:14:45.890693 | orchestrator | Sunday 29 March 2026 04:14:14 +0000 (0:00:01.962) 0:00:08.954 ********** 2026-03-29 04:14:45.890706 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.890719 | orchestrator | skipping: [testbed-node-4] 2026-03-29 
04:14:45.890731 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.890744 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.890756 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.890769 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.890781 | orchestrator | 2026-03-29 04:14:45.890794 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-29 04:14:45.890806 | orchestrator | Sunday 29 March 2026 04:14:16 +0000 (0:00:02.196) 0:00:11.151 ********** 2026-03-29 04:14:45.890818 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:14:45.890831 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:14:45.890844 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:14:45.890856 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:14:45.890870 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:14:45.890882 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:14:45.890894 | orchestrator | 2026-03-29 04:14:45.890905 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-29 04:14:45.890916 | orchestrator | Sunday 29 March 2026 04:14:19 +0000 (0:00:02.736) 0:00:13.888 ********** 2026-03-29 04:14:45.890926 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:14:45.890937 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:14:45.890948 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:14:45.890959 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:14:45.890997 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:14:45.891009 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:14:45.891020 | orchestrator | 2026-03-29 04:14:45.891030 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-29 04:14:45.891041 | orchestrator | Sunday 29 March 2026 04:14:21 +0000 (0:00:02.322) 0:00:16.211 ********** 2026-03-29 04:14:45.891052 | orchestrator | ok: [testbed-node-3] 2026-03-29 
04:14:45.891063 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:14:45.891073 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:14:45.891084 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:14:45.891094 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:14:45.891106 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:14:45.891116 | orchestrator | 2026-03-29 04:14:45.891127 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-29 04:14:45.891138 | orchestrator | Sunday 29 March 2026 04:14:23 +0000 (0:00:02.119) 0:00:18.331 ********** 2026-03-29 04:14:45.891149 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.891160 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.891170 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.891181 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.891192 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.891202 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.891213 | orchestrator | 2026-03-29 04:14:45.891236 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-29 04:14:45.891248 | orchestrator | Sunday 29 March 2026 04:14:25 +0000 (0:00:02.005) 0:00:20.337 ********** 2026-03-29 04:14:45.891259 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.891269 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.891280 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.891291 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.891302 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.891312 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.891323 | orchestrator | 2026-03-29 04:14:45.891334 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-29 04:14:45.891344 | orchestrator | Sunday 29 March 2026 04:14:27 +0000 
(0:00:01.975) 0:00:22.313 ********** 2026-03-29 04:14:45.891355 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 04:14:45.891366 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 04:14:45.891377 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.891425 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 04:14:45.891444 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 04:14:45.891462 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.891481 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 04:14:45.891500 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 04:14:45.891517 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.891534 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 04:14:45.891546 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 04:14:45.891556 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.891591 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 04:14:45.891603 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 04:14:45.891614 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.891624 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 04:14:45.891635 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 04:14:45.891646 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.891656 | orchestrator | 2026-03-29 04:14:45.891667 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin 
to sudo secure_path] ********************* 2026-03-29 04:14:45.891688 | orchestrator | Sunday 29 March 2026 04:14:29 +0000 (0:00:01.980) 0:00:24.293 ********** 2026-03-29 04:14:45.891699 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.891710 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.891720 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.891731 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.891742 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.891753 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.891763 | orchestrator | 2026-03-29 04:14:45.891774 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-29 04:14:45.891786 | orchestrator | Sunday 29 March 2026 04:14:31 +0000 (0:00:02.233) 0:00:26.527 ********** 2026-03-29 04:14:45.891797 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:14:45.891808 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:14:45.891818 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:14:45.891829 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:14:45.891840 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:14:45.891850 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:14:45.891861 | orchestrator | 2026-03-29 04:14:45.891872 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-29 04:14:45.891882 | orchestrator | Sunday 29 March 2026 04:14:34 +0000 (0:00:02.590) 0:00:29.117 ********** 2026-03-29 04:14:45.891893 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:14:45.891904 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:14:45.891914 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:14:45.891925 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:14:45.891935 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:14:45.891946 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:14:45.891956 | 
orchestrator | 2026-03-29 04:14:45.891967 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-29 04:14:45.891978 | orchestrator | Sunday 29 March 2026 04:14:37 +0000 (0:00:02.893) 0:00:32.011 ********** 2026-03-29 04:14:45.891989 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.892000 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.892010 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.892021 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.892032 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.892043 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.892053 | orchestrator | 2026-03-29 04:14:45.892064 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-29 04:14:45.892075 | orchestrator | Sunday 29 March 2026 04:14:39 +0000 (0:00:02.027) 0:00:34.039 ********** 2026-03-29 04:14:45.892085 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.892096 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.892107 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.892117 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.892128 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.892138 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.892149 | orchestrator | 2026-03-29 04:14:45.892160 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-29 04:14:45.892172 | orchestrator | Sunday 29 March 2026 04:14:41 +0000 (0:00:02.099) 0:00:36.138 ********** 2026-03-29 04:14:45.892183 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.892199 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.892210 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.892221 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 04:14:45.892231 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.892242 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:14:45.892252 | orchestrator | 2026-03-29 04:14:45.892263 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-29 04:14:45.892274 | orchestrator | Sunday 29 March 2026 04:14:43 +0000 (0:00:01.803) 0:00:37.942 ********** 2026-03-29 04:14:45.892291 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-29 04:14:45.892302 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-29 04:14:45.892312 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.892323 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-29 04:14:45.892333 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-29 04:14:45.892344 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.892355 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-29 04:14:45.892365 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-29 04:14:45.892376 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:14:45.892411 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-29 04:14:45.892432 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-29 04:14:45.892451 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:14:45.892463 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-29 04:14:45.892473 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-29 04:14:45.892484 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:14:45.892494 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-29 04:14:45.892505 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-29 04:14:45.892516 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
04:14:45.892527 | orchestrator | 2026-03-29 04:14:45.892562 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-29 04:14:45.892574 | orchestrator | Sunday 29 March 2026 04:14:45 +0000 (0:00:02.062) 0:00:40.005 ********** 2026-03-29 04:14:45.892584 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:14:45.892595 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:14:45.892613 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:16:23.705065 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:16:23.705180 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:16:23.705196 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.705208 | orchestrator | 2026-03-29 04:16:23.705220 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-29 04:16:23.705232 | orchestrator | Sunday 29 March 2026 04:14:47 +0000 (0:00:01.887) 0:00:41.892 ********** 2026-03-29 04:16:23.705244 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:16:23.705254 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:16:23.705266 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:16:23.705276 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:16:23.705286 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:16:23.705296 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.705306 | orchestrator | 2026-03-29 04:16:23.705335 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-29 04:16:23.705346 | orchestrator | 2026-03-29 04:16:23.705356 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-29 04:16:23.705368 | orchestrator | Sunday 29 March 2026 04:14:49 +0000 (0:00:02.482) 0:00:44.374 ********** 2026-03-29 04:16:23.705379 | orchestrator | ok: [testbed-node-0] 2026-03-29 
04:16:23.705390 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.705400 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.705410 | orchestrator | 2026-03-29 04:16:23.705419 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-29 04:16:23.705435 | orchestrator | Sunday 29 March 2026 04:14:51 +0000 (0:00:01.817) 0:00:46.192 ********** 2026-03-29 04:16:23.705506 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.705517 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.705528 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.705539 | orchestrator | 2026-03-29 04:16:23.705550 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-29 04:16:23.705559 | orchestrator | Sunday 29 March 2026 04:14:53 +0000 (0:00:02.096) 0:00:48.288 ********** 2026-03-29 04:16:23.705593 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:16:23.705607 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:16:23.705621 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:16:23.705633 | orchestrator | 2026-03-29 04:16:23.705644 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-29 04:16:23.705657 | orchestrator | Sunday 29 March 2026 04:14:55 +0000 (0:00:02.206) 0:00:50.494 ********** 2026-03-29 04:16:23.705668 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.705679 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.705689 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.705700 | orchestrator | 2026-03-29 04:16:23.705712 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-29 04:16:23.705723 | orchestrator | Sunday 29 March 2026 04:14:57 +0000 (0:00:02.007) 0:00:52.502 ********** 2026-03-29 04:16:23.705735 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:16:23.705748 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 04:16:23.705760 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.705771 | orchestrator | 2026-03-29 04:16:23.705782 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-29 04:16:23.705793 | orchestrator | Sunday 29 March 2026 04:14:59 +0000 (0:00:01.383) 0:00:53.886 ********** 2026-03-29 04:16:23.705804 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.705815 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.705825 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.705835 | orchestrator | 2026-03-29 04:16:23.705846 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-29 04:16:23.705856 | orchestrator | Sunday 29 March 2026 04:15:01 +0000 (0:00:01.758) 0:00:55.644 ********** 2026-03-29 04:16:23.705867 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.705878 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.705889 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.705912 | orchestrator | 2026-03-29 04:16:23.705924 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-29 04:16:23.705935 | orchestrator | Sunday 29 March 2026 04:15:03 +0000 (0:00:02.237) 0:00:57.881 ********** 2026-03-29 04:16:23.705947 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:16:23.705960 | orchestrator | 2026-03-29 04:16:23.705971 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-29 04:16:23.705982 | orchestrator | Sunday 29 March 2026 04:15:04 +0000 (0:00:01.681) 0:00:59.563 ********** 2026-03-29 04:16:23.705993 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.706003 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.706072 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.706086 | 
orchestrator | 2026-03-29 04:16:23.706097 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-29 04:16:23.706107 | orchestrator | Sunday 29 March 2026 04:15:07 +0000 (0:00:02.459) 0:01:02.022 ********** 2026-03-29 04:16:23.706117 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:16:23.706128 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.706137 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.706147 | orchestrator | 2026-03-29 04:16:23.706158 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-29 04:16:23.706167 | orchestrator | Sunday 29 March 2026 04:15:09 +0000 (0:00:01.765) 0:01:03.788 ********** 2026-03-29 04:16:23.706177 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:16:23.706186 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.706196 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:16:23.706205 | orchestrator | 2026-03-29 04:16:23.706215 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-29 04:16:23.706225 | orchestrator | Sunday 29 March 2026 04:15:11 +0000 (0:00:01.801) 0:01:05.590 ********** 2026-03-29 04:16:23.706235 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:16:23.706245 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.706255 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:16:23.706277 | orchestrator | 2026-03-29 04:16:23.706288 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-29 04:16:23.706297 | orchestrator | Sunday 29 March 2026 04:15:13 +0000 (0:00:02.437) 0:01:08.027 ********** 2026-03-29 04:16:23.706307 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:16:23.706317 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:16:23.706354 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.706365 | 
orchestrator | 2026-03-29 04:16:23.706375 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-29 04:16:23.706385 | orchestrator | Sunday 29 March 2026 04:15:14 +0000 (0:00:01.361) 0:01:09.389 ********** 2026-03-29 04:16:23.706394 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:16:23.706404 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:16:23.706414 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.706424 | orchestrator | 2026-03-29 04:16:23.706434 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-29 04:16:23.706507 | orchestrator | Sunday 29 March 2026 04:15:16 +0000 (0:00:01.629) 0:01:11.018 ********** 2026-03-29 04:16:23.706519 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:16:23.706528 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:16:23.706538 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:16:23.706548 | orchestrator | 2026-03-29 04:16:23.706559 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-29 04:16:23.706570 | orchestrator | Sunday 29 March 2026 04:15:18 +0000 (0:00:02.133) 0:01:13.152 ********** 2026-03-29 04:16:23.706582 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.706593 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.706603 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.706613 | orchestrator | 2026-03-29 04:16:23.706623 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-29 04:16:23.706632 | orchestrator | Sunday 29 March 2026 04:15:20 +0000 (0:00:01.898) 0:01:15.050 ********** 2026-03-29 04:16:23.706642 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.706651 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.706661 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.706670 | orchestrator | 2026-03-29 04:16:23.706680 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-29 04:16:23.706691 | orchestrator | Sunday 29 March 2026 04:15:21 +0000 (0:00:01.367) 0:01:16.418 ********** 2026-03-29 04:16:23.706701 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 04:16:23.706714 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 04:16:23.706723 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 04:16:23.706733 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-29 04:16:23.706744 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-29 04:16:23.706754 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-29 04:16:23.706764 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.706774 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.706783 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.706793 | orchestrator | 2026-03-29 04:16:23.706803 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-29 04:16:23.706814 | orchestrator | Sunday 29 March 2026 04:15:45 +0000 (0:00:23.205) 0:01:39.623 ********** 2026-03-29 04:16:23.706824 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:16:23.706834 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:16:23.706858 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:16:23.706870 | orchestrator | 2026-03-29 04:16:23.706880 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-29 04:16:23.706890 | orchestrator | Sunday 29 March 2026 04:15:46 +0000 (0:00:01.353) 0:01:40.977 ********** 2026-03-29 04:16:23.706900 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:16:23.706910 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:16:23.706921 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:16:23.706931 | orchestrator | 2026-03-29 04:16:23.706941 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-29 04:16:23.706950 | orchestrator | Sunday 29 March 2026 04:15:48 +0000 (0:00:02.149) 0:01:43.127 ********** 2026-03-29 04:16:23.706960 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.706969 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.706980 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.706990 | orchestrator | 2026-03-29 04:16:23.707001 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-29 04:16:23.707012 | orchestrator | Sunday 29 March 2026 04:15:51 +0000 (0:00:02.553) 0:01:45.680 ********** 2026-03-29 04:16:23.707021 | orchestrator 
| changed: [testbed-node-2] 2026-03-29 04:16:23.707031 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:16:23.707056 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:16:23.707067 | orchestrator | 2026-03-29 04:16:23.707076 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-29 04:16:23.707087 | orchestrator | Sunday 29 March 2026 04:16:18 +0000 (0:00:27.076) 0:02:12.757 ********** 2026-03-29 04:16:23.707096 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.707107 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.707116 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.707126 | orchestrator | 2026-03-29 04:16:23.707136 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-29 04:16:23.707146 | orchestrator | Sunday 29 March 2026 04:16:19 +0000 (0:00:01.706) 0:02:14.464 ********** 2026-03-29 04:16:23.707156 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:16:23.707166 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:16:23.707176 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:16:23.707186 | orchestrator | 2026-03-29 04:16:23.707197 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-29 04:16:23.707207 | orchestrator | Sunday 29 March 2026 04:16:21 +0000 (0:00:01.813) 0:02:16.277 ********** 2026-03-29 04:16:23.707217 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:16:23.707227 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:16:23.707238 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:16:23.707249 | orchestrator | 2026-03-29 04:16:23.707275 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-29 04:17:14.287764 | orchestrator | Sunday 29 March 2026 04:16:23 +0000 (0:00:01.988) 0:02:18.266 ********** 2026-03-29 04:17:14.287844 | orchestrator | ok: [testbed-node-0] 2026-03-29 
04:17:14.287851 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:17:14.287855 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:17:14.287859 | orchestrator | 2026-03-29 04:17:14.287864 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-29 04:17:14.287868 | orchestrator | Sunday 29 March 2026 04:16:25 +0000 (0:00:01.738) 0:02:20.005 ********** 2026-03-29 04:17:14.287872 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:17:14.287876 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:17:14.287880 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:17:14.287883 | orchestrator | 2026-03-29 04:17:14.287888 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-29 04:17:14.287892 | orchestrator | Sunday 29 March 2026 04:16:26 +0000 (0:00:01.514) 0:02:21.520 ********** 2026-03-29 04:17:14.287896 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:17:14.287901 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:17:14.287905 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:17:14.287909 | orchestrator | 2026-03-29 04:17:14.287913 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-29 04:17:14.287933 | orchestrator | Sunday 29 March 2026 04:16:28 +0000 (0:00:01.883) 0:02:23.404 ********** 2026-03-29 04:17:14.287949 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:17:14.287952 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:17:14.287956 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:17:14.287960 | orchestrator | 2026-03-29 04:17:14.287964 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-29 04:17:14.287967 | orchestrator | Sunday 29 March 2026 04:16:30 +0000 (0:00:02.068) 0:02:25.472 ********** 2026-03-29 04:17:14.287971 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:17:14.287975 | orchestrator | changed: 
[testbed-node-1] 2026-03-29 04:17:14.287979 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:17:14.287983 | orchestrator | 2026-03-29 04:17:14.287986 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-29 04:17:14.287990 | orchestrator | Sunday 29 March 2026 04:16:32 +0000 (0:00:01.828) 0:02:27.301 ********** 2026-03-29 04:17:14.287994 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:17:14.287998 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:17:14.288001 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:17:14.288005 | orchestrator | 2026-03-29 04:17:14.288009 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-29 04:17:14.288012 | orchestrator | Sunday 29 March 2026 04:16:34 +0000 (0:00:02.045) 0:02:29.346 ********** 2026-03-29 04:17:14.288016 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:17:14.288020 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:17:14.288024 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:17:14.288027 | orchestrator | 2026-03-29 04:17:14.288031 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-29 04:17:14.288035 | orchestrator | Sunday 29 March 2026 04:16:36 +0000 (0:00:01.636) 0:02:30.983 ********** 2026-03-29 04:17:14.288038 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:17:14.288042 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:17:14.288046 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:17:14.288050 | orchestrator | 2026-03-29 04:17:14.288054 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-29 04:17:14.288057 | orchestrator | Sunday 29 March 2026 04:16:37 +0000 (0:00:01.394) 0:02:32.378 ********** 2026-03-29 04:17:14.288061 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:17:14.288065 | orchestrator | ok: [testbed-node-0] 
2026-03-29 04:17:14.288069 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:17:14.288072 | orchestrator | 2026-03-29 04:17:14.288076 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-29 04:17:14.288080 | orchestrator | Sunday 29 March 2026 04:16:39 +0000 (0:00:01.679) 0:02:34.057 ********** 2026-03-29 04:17:14.288083 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:17:14.288087 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:17:14.288091 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:17:14.288095 | orchestrator | 2026-03-29 04:17:14.288099 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-29 04:17:14.288104 | orchestrator | Sunday 29 March 2026 04:16:41 +0000 (0:00:01.749) 0:02:35.806 ********** 2026-03-29 04:17:14.288108 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 04:17:14.288113 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 04:17:14.288116 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 04:17:14.288120 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 04:17:14.288124 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 04:17:14.288128 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 04:17:14.288136 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 04:17:14.288140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 04:17:14.288144 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-29 04:17:14.288148 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 04:17:14.288151 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-29 04:17:14.288155 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 04:17:14.288169 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 04:17:14.288173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 04:17:14.288177 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 04:17:14.288181 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 04:17:14.288185 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 04:17:14.288188 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 04:17:14.288192 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 04:17:14.288196 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 04:17:14.288200 | orchestrator | 2026-03-29 04:17:14.288203 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-29 04:17:14.288207 | orchestrator | 2026-03-29 04:17:14.288211 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-29 04:17:14.288215 | orchestrator | Sunday 29 March 2026 04:16:45 +0000 (0:00:04.548) 0:02:40.354 ********** 
2026-03-29 04:17:14.288219 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:17:14.288223 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:17:14.288227 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:17:14.288230 | orchestrator | 2026-03-29 04:17:14.288234 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-29 04:17:14.288238 | orchestrator | Sunday 29 March 2026 04:16:47 +0000 (0:00:01.411) 0:02:41.766 ********** 2026-03-29 04:17:14.288242 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:17:14.288246 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:17:14.288249 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:17:14.288253 | orchestrator | 2026-03-29 04:17:14.288257 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-29 04:17:14.288261 | orchestrator | Sunday 29 March 2026 04:16:49 +0000 (0:00:01.829) 0:02:43.596 ********** 2026-03-29 04:17:14.288264 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:17:14.288268 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:17:14.288272 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:17:14.288275 | orchestrator | 2026-03-29 04:17:14.288279 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-29 04:17:14.288283 | orchestrator | Sunday 29 March 2026 04:16:50 +0000 (0:00:01.490) 0:02:45.086 ********** 2026-03-29 04:17:14.288287 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 04:17:14.288291 | orchestrator | 2026-03-29 04:17:14.288294 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-29 04:17:14.288298 | orchestrator | Sunday 29 March 2026 04:16:52 +0000 (0:00:01.753) 0:02:46.840 ********** 2026-03-29 04:17:14.288302 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:17:14.288306 | orchestrator | 
skipping: [testbed-node-4] 2026-03-29 04:17:14.288309 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:17:14.288316 | orchestrator | 2026-03-29 04:17:14.288320 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-29 04:17:14.288324 | orchestrator | Sunday 29 March 2026 04:16:53 +0000 (0:00:01.594) 0:02:48.434 ********** 2026-03-29 04:17:14.288328 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:17:14.288331 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:17:14.288344 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:17:14.288349 | orchestrator | 2026-03-29 04:17:14.288360 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-29 04:17:14.288364 | orchestrator | Sunday 29 March 2026 04:16:55 +0000 (0:00:01.476) 0:02:49.911 ********** 2026-03-29 04:17:14.288369 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:17:14.288377 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:17:14.288381 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:17:14.288386 | orchestrator | 2026-03-29 04:17:14.288390 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-29 04:17:14.288394 | orchestrator | Sunday 29 March 2026 04:16:56 +0000 (0:00:01.380) 0:02:51.291 ********** 2026-03-29 04:17:14.288399 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:17:14.288403 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:17:14.288407 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:17:14.288412 | orchestrator | 2026-03-29 04:17:14.288416 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-29 04:17:14.288421 | orchestrator | Sunday 29 March 2026 04:16:58 +0000 (0:00:01.852) 0:02:53.144 ********** 2026-03-29 04:17:14.288425 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:17:14.288430 | orchestrator | ok: [testbed-node-4] 
2026-03-29 04:17:14.288434 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:17:14.288438 | orchestrator | 2026-03-29 04:17:14.288443 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-29 04:17:14.288447 | orchestrator | Sunday 29 March 2026 04:17:01 +0000 (0:00:02.599) 0:02:55.744 ********** 2026-03-29 04:17:14.288451 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:17:14.288456 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:17:14.288460 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:17:14.288483 | orchestrator | 2026-03-29 04:17:14.288490 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-29 04:17:14.288496 | orchestrator | Sunday 29 March 2026 04:17:03 +0000 (0:00:02.333) 0:02:58.078 ********** 2026-03-29 04:17:14.288503 | orchestrator | changed: [testbed-node-3] 2026-03-29 04:17:14.288509 | orchestrator | changed: [testbed-node-5] 2026-03-29 04:17:14.288516 | orchestrator | changed: [testbed-node-4] 2026-03-29 04:17:14.288522 | orchestrator | 2026-03-29 04:17:14.288529 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-29 04:17:14.288536 | orchestrator | 2026-03-29 04:17:14.288543 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-29 04:17:14.288547 | orchestrator | Sunday 29 March 2026 04:17:12 +0000 (0:00:08.598) 0:03:06.676 ********** 2026-03-29 04:17:14.288551 | orchestrator | ok: [testbed-manager] 2026-03-29 04:17:14.288555 | orchestrator | 2026-03-29 04:17:14.288558 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-29 04:17:14.288566 | orchestrator | Sunday 29 March 2026 04:17:14 +0000 (0:00:02.175) 0:03:08.851 ********** 2026-03-29 04:18:25.242288 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242398 | orchestrator | 2026-03-29 04:18:25.242412 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-29 04:18:25.242420 | orchestrator | Sunday 29 March 2026 04:17:15 +0000 (0:00:01.497) 0:03:10.348 ********** 2026-03-29 04:18:25.242428 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 04:18:25.242434 | orchestrator | 2026-03-29 04:18:25.242439 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-29 04:18:25.242445 | orchestrator | Sunday 29 March 2026 04:17:17 +0000 (0:00:01.571) 0:03:11.919 ********** 2026-03-29 04:18:25.242452 | orchestrator | changed: [testbed-manager] 2026-03-29 04:18:25.242457 | orchestrator | 2026-03-29 04:18:25.242486 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-29 04:18:25.242544 | orchestrator | Sunday 29 March 2026 04:17:19 +0000 (0:00:02.003) 0:03:13.923 ********** 2026-03-29 04:18:25.242550 | orchestrator | changed: [testbed-manager] 2026-03-29 04:18:25.242557 | orchestrator | 2026-03-29 04:18:25.242563 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-29 04:18:25.242582 | orchestrator | Sunday 29 March 2026 04:17:20 +0000 (0:00:01.588) 0:03:15.511 ********** 2026-03-29 04:18:25.242589 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 04:18:25.242594 | orchestrator | 2026-03-29 04:18:25.242600 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-29 04:18:25.242606 | orchestrator | Sunday 29 March 2026 04:17:23 +0000 (0:00:03.022) 0:03:18.533 ********** 2026-03-29 04:18:25.242612 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 04:18:25.242618 | orchestrator | 2026-03-29 04:18:25.242624 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-29 04:18:25.242630 | orchestrator | Sunday 29 March 2026 
04:17:25 +0000 (0:00:01.932) 0:03:20.466 ********** 2026-03-29 04:18:25.242636 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242642 | orchestrator | 2026-03-29 04:18:25.242648 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-29 04:18:25.242654 | orchestrator | Sunday 29 March 2026 04:17:27 +0000 (0:00:01.465) 0:03:21.931 ********** 2026-03-29 04:18:25.242661 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242667 | orchestrator | 2026-03-29 04:18:25.242673 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-29 04:18:25.242679 | orchestrator | 2026-03-29 04:18:25.242685 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-29 04:18:25.242694 | orchestrator | Sunday 29 March 2026 04:17:29 +0000 (0:00:01.772) 0:03:23.704 ********** 2026-03-29 04:18:25.242703 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242709 | orchestrator | 2026-03-29 04:18:25.242715 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-29 04:18:25.242721 | orchestrator | Sunday 29 March 2026 04:17:30 +0000 (0:00:01.147) 0:03:24.852 ********** 2026-03-29 04:18:25.242727 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 04:18:25.242734 | orchestrator | 2026-03-29 04:18:25.242740 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-29 04:18:25.242746 | orchestrator | Sunday 29 March 2026 04:17:31 +0000 (0:00:01.568) 0:03:26.421 ********** 2026-03-29 04:18:25.242752 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242759 | orchestrator | 2026-03-29 04:18:25.242764 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-29 04:18:25.242768 | orchestrator | Sunday 29 March 2026 
04:17:33 +0000 (0:00:01.841) 0:03:28.262 ********** 2026-03-29 04:18:25.242772 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242776 | orchestrator | 2026-03-29 04:18:25.242779 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-29 04:18:25.242783 | orchestrator | Sunday 29 March 2026 04:17:36 +0000 (0:00:02.698) 0:03:30.961 ********** 2026-03-29 04:18:25.242787 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242791 | orchestrator | 2026-03-29 04:18:25.242795 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-29 04:18:25.242798 | orchestrator | Sunday 29 March 2026 04:17:37 +0000 (0:00:01.504) 0:03:32.466 ********** 2026-03-29 04:18:25.242802 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242806 | orchestrator | 2026-03-29 04:18:25.242810 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-29 04:18:25.242814 | orchestrator | Sunday 29 March 2026 04:17:39 +0000 (0:00:01.557) 0:03:34.023 ********** 2026-03-29 04:18:25.242818 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242822 | orchestrator | 2026-03-29 04:18:25.242828 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-29 04:18:25.242842 | orchestrator | Sunday 29 March 2026 04:17:41 +0000 (0:00:01.670) 0:03:35.693 ********** 2026-03-29 04:18:25.242848 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242854 | orchestrator | 2026-03-29 04:18:25.242860 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-29 04:18:25.242866 | orchestrator | Sunday 29 March 2026 04:17:43 +0000 (0:00:02.494) 0:03:38.188 ********** 2026-03-29 04:18:25.242872 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:25.242878 | orchestrator | 2026-03-29 04:18:25.242886 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-03-29 04:18:25.242892 | orchestrator | 2026-03-29 04:18:25.242897 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-29 04:18:25.242904 | orchestrator | Sunday 29 March 2026 04:17:45 +0000 (0:00:01.654) 0:03:39.842 ********** 2026-03-29 04:18:25.242913 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:18:25.242920 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:18:25.242926 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:18:25.242932 | orchestrator | 2026-03-29 04:18:25.242937 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-29 04:18:25.242943 | orchestrator | Sunday 29 March 2026 04:17:47 +0000 (0:00:01.743) 0:03:41.586 ********** 2026-03-29 04:18:25.242949 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:18:25.242954 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:18:25.242960 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:18:25.242966 | orchestrator | 2026-03-29 04:18:25.242990 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-29 04:18:25.242995 | orchestrator | Sunday 29 March 2026 04:17:48 +0000 (0:00:01.376) 0:03:42.963 ********** 2026-03-29 04:18:25.242999 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:18:25.243003 | orchestrator | 2026-03-29 04:18:25.243007 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-29 04:18:25.243011 | orchestrator | Sunday 29 March 2026 04:17:50 +0000 (0:00:01.813) 0:03:44.777 ********** 2026-03-29 04:18:25.243014 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 04:18:25.243018 | orchestrator | 2026-03-29 04:18:25.243022 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-03-29 04:18:25.243025 | orchestrator | Sunday 29 March 2026 04:17:52 +0000 (0:00:01.891) 0:03:46.669 ********** 2026-03-29 04:18:25.243029 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 04:18:25.243033 | orchestrator | 2026-03-29 04:18:25.243037 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-29 04:18:25.243040 | orchestrator | Sunday 29 March 2026 04:17:54 +0000 (0:00:01.922) 0:03:48.591 ********** 2026-03-29 04:18:25.243044 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:18:25.243048 | orchestrator | 2026-03-29 04:18:25.243052 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-29 04:18:25.243056 | orchestrator | Sunday 29 March 2026 04:17:55 +0000 (0:00:01.221) 0:03:49.813 ********** 2026-03-29 04:18:25.243060 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 04:18:25.243063 | orchestrator | 2026-03-29 04:18:25.243067 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-29 04:18:25.243071 | orchestrator | Sunday 29 March 2026 04:17:57 +0000 (0:00:02.033) 0:03:51.846 ********** 2026-03-29 04:18:25.243075 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 04:18:25.243078 | orchestrator | 2026-03-29 04:18:25.243082 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-29 04:18:25.243086 | orchestrator | Sunday 29 March 2026 04:17:59 +0000 (0:00:02.362) 0:03:54.208 ********** 2026-03-29 04:18:25.243090 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 04:18:25.243094 | orchestrator | 2026-03-29 04:18:25.243098 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-29 04:18:25.243102 | orchestrator | Sunday 29 March 2026 04:18:00 +0000 (0:00:01.153) 0:03:55.362 ********** 2026-03-29 04:18:25.243113 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-03-29 04:18:25.243120 | orchestrator | 2026-03-29 04:18:25.243129 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-29 04:18:25.243134 | orchestrator | Sunday 29 March 2026 04:18:01 +0000 (0:00:01.167) 0:03:56.530 ********** 2026-03-29 04:18:25.243140 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-03-29 04:18:25.243146 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-03-29 04:18:25.243154 | orchestrator | } 2026-03-29 04:18:25.243160 | orchestrator | 2026-03-29 04:18:25.243167 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-29 04:18:25.243172 | orchestrator | Sunday 29 March 2026 04:18:03 +0000 (0:00:01.163) 0:03:57.694 ********** 2026-03-29 04:18:25.243179 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:18:25.243185 | orchestrator | 2026-03-29 04:18:25.243191 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-29 04:18:25.243197 | orchestrator | Sunday 29 March 2026 04:18:04 +0000 (0:00:01.124) 0:03:58.819 ********** 2026-03-29 04:18:25.243203 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-29 04:18:25.243209 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-29 04:18:25.243216 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-29 04:18:25.243223 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-29 04:18:25.243229 | orchestrator | 2026-03-29 04:18:25.243236 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-29 04:18:25.243242 | orchestrator | Sunday 29 March 2026 04:18:09 +0000 (0:00:05.614) 0:04:04.433 ********** 2026-03-29 04:18:25.243249 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-03-29 04:18:25.243255 | orchestrator | 2026-03-29 04:18:25.243270 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-29 04:18:25.243276 | orchestrator | Sunday 29 March 2026 04:18:12 +0000 (0:00:02.580) 0:04:07.014 ********** 2026-03-29 04:18:25.243282 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 04:18:25.243288 | orchestrator | 2026-03-29 04:18:25.243293 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-29 04:18:25.243299 | orchestrator | Sunday 29 March 2026 04:18:15 +0000 (0:00:02.672) 0:04:09.686 ********** 2026-03-29 04:18:25.243305 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 04:18:25.243310 | orchestrator | 2026-03-29 04:18:25.243317 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-29 04:18:25.243323 | orchestrator | Sunday 29 March 2026 04:18:19 +0000 (0:00:04.677) 0:04:14.364 ********** 2026-03-29 04:18:25.243329 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:18:25.243335 | orchestrator | 2026-03-29 04:18:25.243341 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-29 04:18:25.243346 | orchestrator | Sunday 29 March 2026 04:18:20 +0000 (0:00:01.155) 0:04:15.519 ********** 2026-03-29 04:18:25.243352 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-29 04:18:25.243359 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-29 04:18:25.243364 | orchestrator | 2026-03-29 04:18:25.243370 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-29 04:18:25.243376 | orchestrator | Sunday 29 March 2026 04:18:23 +0000 (0:00:02.844) 0:04:18.364 ********** 2026-03-29 
04:18:25.243382 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:18:25.243396 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:18:52.729462 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:18:52.729606 | orchestrator | 2026-03-29 04:18:52.729621 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-29 04:18:52.729631 | orchestrator | Sunday 29 March 2026 04:18:25 +0000 (0:00:01.444) 0:04:19.808 ********** 2026-03-29 04:18:52.729660 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:18:52.729668 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:18:52.729674 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:18:52.729679 | orchestrator | 2026-03-29 04:18:52.729687 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-29 04:18:52.729702 | orchestrator | 2026-03-29 04:18:52.729709 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-29 04:18:52.729715 | orchestrator | Sunday 29 March 2026 04:18:27 +0000 (0:00:02.333) 0:04:22.142 ********** 2026-03-29 04:18:52.729723 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:52.729731 | orchestrator | 2026-03-29 04:18:52.729737 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-29 04:18:52.729743 | orchestrator | Sunday 29 March 2026 04:18:28 +0000 (0:00:01.178) 0:04:23.320 ********** 2026-03-29 04:18:52.729766 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 04:18:52.729773 | orchestrator | 2026-03-29 04:18:52.729779 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-29 04:18:52.729785 | orchestrator | Sunday 29 March 2026 04:18:30 +0000 (0:00:01.528) 0:04:24.849 ********** 2026-03-29 04:18:52.729792 | orchestrator | ok: [testbed-manager] 2026-03-29 04:18:52.729798 | 
orchestrator | 2026-03-29 04:18:52.729804 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-29 04:18:52.729811 | orchestrator | 2026-03-29 04:18:52.729817 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-29 04:18:52.729823 | orchestrator | Sunday 29 March 2026 04:18:35 +0000 (0:00:05.472) 0:04:30.322 ********** 2026-03-29 04:18:52.729827 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:18:52.729830 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:18:52.729834 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:18:52.729838 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:18:52.729842 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:18:52.729845 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:18:52.729849 | orchestrator | 2026-03-29 04:18:52.729853 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-29 04:18:52.729857 | orchestrator | Sunday 29 March 2026 04:18:37 +0000 (0:00:02.095) 0:04:32.417 ********** 2026-03-29 04:18:52.729861 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 04:18:52.729865 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 04:18:52.729869 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 04:18:52.729873 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 04:18:52.729876 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 04:18:52.729881 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 04:18:52.729884 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-03-29 04:18:52.729888 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 04:18:52.729892 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 04:18:52.729896 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 04:18:52.729900 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 04:18:52.729904 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 04:18:52.729907 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 04:18:52.729911 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 04:18:52.729915 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 04:18:52.729923 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 04:18:52.729927 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 04:18:52.729931 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 04:18:52.729934 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 04:18:52.729938 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 04:18:52.729942 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 04:18:52.729945 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 04:18:52.729949 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 
04:18:52.729953 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 04:18:52.729957 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 04:18:52.729961 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 04:18:52.729979 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 04:18:52.729983 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 04:18:52.729987 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 04:18:52.729990 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 04:18:52.729994 | orchestrator | 2026-03-29 04:18:52.729998 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-29 04:18:52.730001 | orchestrator | Sunday 29 March 2026 04:18:48 +0000 (0:00:10.263) 0:04:42.681 ********** 2026-03-29 04:18:52.730006 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:18:52.730011 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:18:52.730057 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:18:52.730070 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:18:52.730074 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:18:52.730079 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:18:52.730089 | orchestrator | 2026-03-29 04:18:52.730094 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-29 04:18:52.730098 | orchestrator | Sunday 29 March 2026 04:18:50 +0000 (0:00:01.978) 0:04:44.659 ********** 2026-03-29 04:18:52.730103 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:18:52.730107 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 04:18:52.730112 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:18:52.730116 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:18:52.730121 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:18:52.730125 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:18:52.730130 | orchestrator | 2026-03-29 04:18:52.730134 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:18:52.730139 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 04:18:52.730147 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 04:18:52.730152 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 04:18:52.730155 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 04:18:52.730162 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 04:18:52.730175 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 04:18:52.730184 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 04:18:52.730190 | orchestrator | 2026-03-29 04:18:52.730196 | orchestrator | 2026-03-29 04:18:52.730202 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:18:52.730208 | orchestrator | Sunday 29 March 2026 04:18:52 +0000 (0:00:02.615) 0:04:47.274 ********** 2026-03-29 04:18:52.730214 | orchestrator | =============================================================================== 2026-03-29 04:18:52.730220 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.08s 2026-03-29 04:18:52.730226 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.21s 2026-03-29 04:18:52.730234 | orchestrator | Manage labels ---------------------------------------------------------- 10.26s 2026-03-29 04:18:52.730240 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.60s 2026-03-29 04:18:52.730246 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.61s 2026-03-29 04:18:52.730252 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.47s 2026-03-29 04:18:52.730257 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.68s 2026-03-29 04:18:52.730263 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.55s 2026-03-29 04:18:52.730269 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.07s 2026-03-29 04:18:52.730276 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.02s 2026-03-29 04:18:52.730282 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.89s 2026-03-29 04:18:52.730288 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.84s 2026-03-29 04:18:52.730295 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.74s 2026-03-29 04:18:52.730301 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.70s 2026-03-29 04:18:52.730308 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.67s 2026-03-29 04:18:52.730315 | orchestrator | Manage taints ----------------------------------------------------------- 2.62s 2026-03-29 04:18:52.730321 | orchestrator | k3s_agent : Create custom resolv.conf for k3s 
--------------------------- 2.60s 2026-03-29 04:18:52.730329 | orchestrator | k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries --- 2.59s 2026-03-29 04:18:52.730339 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.58s 2026-03-29 04:18:53.269374 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 2.55s 2026-03-29 04:18:53.612942 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-29 04:18:53.613013 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-03-29 04:18:53.618759 | orchestrator | + set -e 2026-03-29 04:18:53.618855 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 04:18:53.618897 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 04:18:53.618911 | orchestrator | ++ INTERACTIVE=false 2026-03-29 04:18:53.618924 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 04:18:53.618936 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 04:18:53.618947 | orchestrator | + osism apply openstackclient 2026-03-29 04:19:05.894396 | orchestrator | 2026-03-29 04:19:05 | INFO  | Task f8cd1836-1fe8-409e-a22d-b2bded3a0e19 (openstackclient) was prepared for execution. 2026-03-29 04:19:05.894494 | orchestrator | 2026-03-29 04:19:05 | INFO  | It takes a moment until task f8cd1836-1fe8-409e-a22d-b2bded3a0e19 (openstackclient) has been started and output is visible here. 
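The upgrade script sourced above exports `OSISM_APPLY_RETRY=1` before calling `osism apply`. As a rough illustration of what such a knob implies (this is a hypothetical sketch, not OSISM's actual implementation), a retry loop of this shape re-runs a deploy step a configurable number of times; the `run_step` callable is invented for the example, and only the environment-variable name is taken from the log:

```python
import os


def apply_with_retry(run_step, retries=None):
    """Re-run a deploy step up to OSISM_APPLY_RETRY times.

    `run_step` is any callable returning True on success. Hypothetical
    sketch of the behaviour suggested by the OSISM_APPLY_RETRY variable
    seen in the log, not OSISM's real code.
    """
    if retries is None:
        # Default mirrors the script: OSISM_APPLY_RETRY=1 means one attempt.
        retries = int(os.environ.get("OSISM_APPLY_RETRY", "1"))
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            if run_step():
                return attempt  # number of attempts actually used
        except Exception as exc:  # sketch only: retry on any failure
            last_error = exc
    raise RuntimeError(f"step failed after {retries} attempt(s): {last_error}")
```

With `OSISM_APPLY_RETRY=1`, as set here, a failing step surfaces immediately rather than being retried, which is the sensible default for a CI job that should fail fast.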
2026-03-29 04:19:42.074323 | orchestrator | 2026-03-29 04:19:42.074476 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-29 04:19:42.074502 | orchestrator | 2026-03-29 04:19:42.074513 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-29 04:19:42.074567 | orchestrator | Sunday 29 March 2026 04:19:12 +0000 (0:00:02.080) 0:00:02.080 ********** 2026-03-29 04:19:42.074580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-29 04:19:42.074590 | orchestrator | 2026-03-29 04:19:42.074599 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-29 04:19:42.074608 | orchestrator | Sunday 29 March 2026 04:19:14 +0000 (0:00:01.825) 0:00:03.906 ********** 2026-03-29 04:19:42.074617 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-29 04:19:42.074627 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-29 04:19:42.074636 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-29 04:19:42.074646 | orchestrator | 2026-03-29 04:19:42.074654 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-29 04:19:42.074663 | orchestrator | Sunday 29 March 2026 04:19:16 +0000 (0:00:02.398) 0:00:06.305 ********** 2026-03-29 04:19:42.074672 | orchestrator | changed: [testbed-manager] 2026-03-29 04:19:42.074681 | orchestrator | 2026-03-29 04:19:42.074690 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-29 04:19:42.074699 | orchestrator | Sunday 29 March 2026 04:19:19 +0000 (0:00:02.395) 0:00:08.701 ********** 2026-03-29 04:19:42.074711 | orchestrator | ok: [testbed-manager] 2026-03-29 04:19:42.074727 | 
orchestrator | 2026-03-29 04:19:42.074741 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-29 04:19:42.074755 | orchestrator | Sunday 29 March 2026 04:19:21 +0000 (0:00:02.054) 0:00:10.756 ********** 2026-03-29 04:19:42.074769 | orchestrator | ok: [testbed-manager] 2026-03-29 04:19:42.074794 | orchestrator | 2026-03-29 04:19:42.074807 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-29 04:19:42.074821 | orchestrator | Sunday 29 March 2026 04:19:23 +0000 (0:00:01.976) 0:00:12.733 ********** 2026-03-29 04:19:42.074834 | orchestrator | ok: [testbed-manager] 2026-03-29 04:19:42.074847 | orchestrator | 2026-03-29 04:19:42.074861 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-29 04:19:42.074875 | orchestrator | Sunday 29 March 2026 04:19:24 +0000 (0:00:01.459) 0:00:14.192 ********** 2026-03-29 04:19:42.074889 | orchestrator | changed: [testbed-manager] 2026-03-29 04:19:42.074902 | orchestrator | 2026-03-29 04:19:42.074916 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-29 04:19:42.074931 | orchestrator | Sunday 29 March 2026 04:19:36 +0000 (0:00:11.507) 0:00:25.700 ********** 2026-03-29 04:19:42.074945 | orchestrator | changed: [testbed-manager] 2026-03-29 04:19:42.074960 | orchestrator | 2026-03-29 04:19:42.074975 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-29 04:19:42.074990 | orchestrator | Sunday 29 March 2026 04:19:38 +0000 (0:00:02.057) 0:00:27.757 ********** 2026-03-29 04:19:42.075004 | orchestrator | changed: [testbed-manager] 2026-03-29 04:19:42.075019 | orchestrator | 2026-03-29 04:19:42.075034 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-29 04:19:42.075049 | orchestrator | Sunday 29 March 2026 
04:19:39 +0000 (0:00:01.589) 0:00:29.346 ********** 2026-03-29 04:19:42.075063 | orchestrator | ok: [testbed-manager] 2026-03-29 04:19:42.075079 | orchestrator | 2026-03-29 04:19:42.075089 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:19:42.075099 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 04:19:42.075109 | orchestrator | 2026-03-29 04:19:42.075143 | orchestrator | 2026-03-29 04:19:42.075152 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:19:42.075161 | orchestrator | Sunday 29 March 2026 04:19:41 +0000 (0:00:01.882) 0:00:31.229 ********** 2026-03-29 04:19:42.075170 | orchestrator | =============================================================================== 2026-03-29 04:19:42.075179 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.51s 2026-03-29 04:19:42.075188 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.40s 2026-03-29 04:19:42.075197 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.40s 2026-03-29 04:19:42.075206 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.06s 2026-03-29 04:19:42.075215 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.06s 2026-03-29 04:19:42.075223 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.98s 2026-03-29 04:19:42.075232 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.88s 2026-03-29 04:19:42.075241 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.83s 2026-03-29 04:19:42.075250 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.59s 
2026-03-29 04:19:42.075258 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.46s 2026-03-29 04:19:42.438055 | orchestrator | + osism apply -a upgrade common 2026-03-29 04:19:44.560333 | orchestrator | 2026-03-29 04:19:44 | INFO  | Task e6746610-67a5-4c67-a4d1-6ff0a3dac518 (common) was prepared for execution. 2026-03-29 04:19:44.560417 | orchestrator | 2026-03-29 04:19:44 | INFO  | It takes a moment until task e6746610-67a5-4c67-a4d1-6ff0a3dac518 (common) has been started and output is visible here. 2026-03-29 04:20:01.563870 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-29 04:20:01.563991 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-29 04:20:01.564016 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-29 04:20:01.564025 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-29 04:20:01.564043 | orchestrator | 2026-03-29 04:20:01.564053 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-29 04:20:01.564062 | orchestrator | 2026-03-29 04:20:01.564070 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-29 04:20:01.564079 | orchestrator | Sunday 29 March 2026 04:19:50 +0000 (0:00:01.831) 0:00:01.831 ********** 2026-03-29 04:20:01.564088 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 04:20:01.564103 | orchestrator | 2026-03-29 04:20:01.564118 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-29 04:20:01.564131 | orchestrator | Sunday 29 March 2026 04:19:53 +0000 (0:00:02.200) 0:00:04.031 ********** 2026-03-29 04:20:01.564145 | orchestrator | ok: [testbed-node-0] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 04:20:01.564160 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 04:20:01.564172 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 04:20:01.564184 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 04:20:01.564197 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 04:20:01.564211 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 04:20:01.564226 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 04:20:01.564268 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 04:20:01.564284 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 04:20:01.564298 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 04:20:01.564312 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 04:20:01.564327 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 04:20:01.564341 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 04:20:01.564355 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 04:20:01.564369 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 04:20:01.564381 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 04:20:01.564395 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 04:20:01.564408 | orchestrator | ok: [testbed-node-5] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 04:20:01.564421 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 04:20:01.564435 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 04:20:01.564448 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 04:20:01.564464 | orchestrator | 2026-03-29 04:20:01.564479 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-29 04:20:01.564493 | orchestrator | Sunday 29 March 2026 04:19:56 +0000 (0:00:03.103) 0:00:07.134 ********** 2026-03-29 04:20:01.564509 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 04:20:01.564640 | orchestrator | 2026-03-29 04:20:01.564665 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-29 04:20:01.564681 | orchestrator | Sunday 29 March 2026 04:19:58 +0000 (0:00:02.396) 0:00:09.531 ********** 2026-03-29 04:20:01.564702 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:20:01.564758 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:20:01.564778 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:20:01.564796 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:20:01.564828 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:20:01.565036 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:20:01.565067 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:01.565088 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 
04:20:01.565125 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:20:03.495236 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495362 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495378 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495390 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495402 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495427 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495438 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495470 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495494 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495606 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495623 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:03.495640 | orchestrator | 2026-03-29 04:20:03.495656 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-29 04:20:03.495672 | orchestrator | Sunday 29 March 2026 04:20:02 +0000 (0:00:04.057) 0:00:13.589 ********** 2026-03-29 04:20:03.495691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:03.495710 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:03.495727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:03.495768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384079 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:04.384276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:04.384287 | orchestrator | skipping: 
[testbed-manager] 2026-03-29 04:20:04.384298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384308 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:20:04.384330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384368 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:20:04.384392 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384401 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:20:04.384409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:04.384418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:04.384426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:04.384434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:04.384484 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:20:05.912279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912371 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:20:05.912381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912386 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:20:05.912390 | orchestrator | 2026-03-29 04:20:05.912395 | orchestrator | TASK [service-cert-copy : common | 
Copying over backend internal TLS key] ****** 2026-03-29 04:20:05.912400 | orchestrator | Sunday 29 March 2026 04:20:04 +0000 (0:00:01.769) 0:00:15.358 ********** 2026-03-29 04:20:05.912406 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:05.912412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:05.912423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912429 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:05.912469 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:05.912482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912488 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:20:05.912493 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:05.912518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:05.912640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:15.308441 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:20:15.308519 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:20:15.308525 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:20:15.308555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:15.308569 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:20:15.308577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:15.308587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:15.308630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:15.308638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:15.308644 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:20:15.308650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:15.308658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:15.308664 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:20:15.308671 | orchestrator | 2026-03-29 04:20:15.308678 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-29 04:20:15.308687 | orchestrator | Sunday 29 March 2026 04:20:07 +0000 (0:00:02.723) 0:00:18.081 ********** 2026-03-29 04:20:15.308694 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:20:15.308710 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:20:15.308714 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:20:15.308718 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:20:15.308737 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:20:15.308741 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:20:15.308744 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:20:15.308748 | orchestrator | 2026-03-29 04:20:15.308752 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-29 04:20:15.308756 | orchestrator | Sunday 29 March 2026 04:20:08 +0000 (0:00:01.079) 0:00:19.161 ********** 2026-03-29 04:20:15.308760 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:20:15.308763 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:20:15.308767 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:20:15.308771 | orchestrator | 
skipping: [testbed-node-2]
2026-03-29 04:20:15.308774 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:20:15.308778 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:20:15.308782 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:20:15.308786 | orchestrator |
2026-03-29 04:20:15.308789 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-29 04:20:15.308793 | orchestrator | Sunday 29 March 2026 04:20:09 +0000 (0:00:00.974) 0:00:20.135 **********
2026-03-29 04:20:15.308797 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:20:15.308801 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:20:15.308810 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:20:15.308814 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:20:15.308817 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:20:15.308821 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:20:15.308825 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:20:15.308829 | orchestrator |
2026-03-29 04:20:15.308832 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-03-29 04:20:15.308836 | orchestrator | Sunday 29 March 2026 04:20:09 +0000 (0:00:00.763) 0:00:20.899 **********
2026-03-29 04:20:15.308840 | orchestrator | changed: [testbed-manager]
2026-03-29 04:20:15.308844 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:20:15.308847 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:20:15.308851 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:20:15.308855 | orchestrator | changed: [testbed-node-3]
2026-03-29 04:20:15.308858 | orchestrator | changed: [testbed-node-4]
2026-03-29 04:20:15.308862 | orchestrator | changed: [testbed-node-5]
2026-03-29 04:20:15.308866 | orchestrator |
2026-03-29 04:20:15.308870 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-29 04:20:15.308873 | orchestrator | Sunday 29 March 2026 04:20:12 +0000 (0:00:02.096) 0:00:22.996 **********
2026-03-29 04:20:15.308878 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:15.308886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:15.308890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:15.308894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:15.308902 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:16.225283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:16.225310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:16.225330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225370 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225468 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225662 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:16.225717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:29.772169 | orchestrator |
2026-03-29 04:20:29.772235 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-29 04:20:29.772241 | orchestrator | Sunday 29 March 2026 04:20:16 +0000 (0:00:04.209) 0:00:27.205 **********
2026-03-29 04:20:29.772246 | orchestrator | [WARNING]: Skipped
2026-03-29 04:20:29.772251 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-29 04:20:29.772256 | orchestrator | to this access issue:
2026-03-29 04:20:29.772261 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-29 04:20:29.772265 | orchestrator | directory
2026-03-29 04:20:29.772269 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 04:20:29.772274 | orchestrator |
2026-03-29 04:20:29.772278 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-29 04:20:29.772282 | orchestrator | Sunday 29 March 2026 04:20:17 +0000 (0:00:01.250) 0:00:28.456 **********
2026-03-29 04:20:29.772285 | orchestrator | [WARNING]: Skipped
2026-03-29 04:20:29.772289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-29 04:20:29.772293 | orchestrator | to this access issue:
2026-03-29 04:20:29.772297 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-29 04:20:29.772300 | orchestrator | directory
2026-03-29 04:20:29.772304 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 04:20:29.772308 | orchestrator |
2026-03-29 04:20:29.772311 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-29 04:20:29.772315 | orchestrator | Sunday 29 March 2026 04:20:18 +0000 (0:00:00.972) 0:00:29.428 **********
2026-03-29 04:20:29.772319 | orchestrator | [WARNING]: Skipped
2026-03-29 04:20:29.772323 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-29 04:20:29.772327 | orchestrator | to this access issue:
2026-03-29 04:20:29.772331 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-29 04:20:29.772334 | orchestrator | directory
2026-03-29 04:20:29.772338 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 04:20:29.772342 | orchestrator |
2026-03-29 04:20:29.772345 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-29 04:20:29.772349 | orchestrator | Sunday 29 March 2026 04:20:19 +0000 (0:00:00.964) 0:00:30.393 **********
2026-03-29 04:20:29.772353 | orchestrator | [WARNING]: Skipped
2026-03-29 04:20:29.772361 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-29 04:20:29.772365 | orchestrator | to this access issue:
2026-03-29 04:20:29.772369 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-29 04:20:29.772372 | orchestrator | directory
2026-03-29 04:20:29.772376 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 04:20:29.772380 | orchestrator |
2026-03-29 04:20:29.772384 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-29 04:20:29.772387 | orchestrator | Sunday 29 March 2026 04:20:20 +0000 (0:00:00.883) 0:00:31.276 **********
2026-03-29 04:20:29.772391 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:20:29.772406 | orchestrator | changed: [testbed-manager]
2026-03-29 04:20:29.772410 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:20:29.772414 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:20:29.772417 | orchestrator | changed: [testbed-node-3]
2026-03-29 04:20:29.772421 | orchestrator | changed: [testbed-node-4]
2026-03-29 04:20:29.772425 | orchestrator | changed: [testbed-node-5]
2026-03-29 04:20:29.772429 | orchestrator |
2026-03-29 04:20:29.772432 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-29 04:20:29.772436 | orchestrator | Sunday 29 March 2026 04:20:23 +0000 (0:00:03.355) 0:00:34.631 **********
2026-03-29 04:20:29.772440 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 04:20:29.772444 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 04:20:29.772448 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 04:20:29.772452 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 04:20:29.772456 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 04:20:29.772459 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 04:20:29.772463 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-29 04:20:29.772467 | orchestrator |
2026-03-29 04:20:29.772471 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-29 04:20:29.772474 | orchestrator | Sunday 29 March 2026 04:20:25 +0000 (0:00:02.270) 0:00:36.901 **********
2026-03-29 04:20:29.772478 | orchestrator | ok: [testbed-manager]
2026-03-29 04:20:29.772482 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:20:29.772486 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:20:29.772489 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:20:29.772493 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:20:29.772497 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:20:29.772501 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:20:29.772504 | orchestrator |
2026-03-29 04:20:29.772508 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-29 04:20:29.772512 | orchestrator | Sunday 29 March 2026 04:20:27 +0000 (0:00:01.891) 0:00:38.793 **********
2026-03-29 04:20:29.772525 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:29.772531 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:29.772579 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:29.772589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:29.772594 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:29.772600 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:29.772605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:29.772612 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:34.247948 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248035 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:34.248055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248063 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:34.248070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248077 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248096 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:34.248103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248115 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248123 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248130 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248137 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:34.248144 | orchestrator |
2026-03-29 04:20:34.248152 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-29 04:20:34.248160 | orchestrator | Sunday 29 March 2026 04:20:29 +0000 (0:00:02.085) 0:00:40.878 **********
2026-03-29 04:20:34.248167 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-29 04:20:34.248175 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-29 04:20:34.248186 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-29 04:20:34.248193 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-29 04:20:34.248199 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-29 04:20:34.248206 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-29 04:20:34.248213 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-29 04:20:34.248219 | orchestrator |
2026-03-29 04:20:34.248226 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-29 04:20:34.248232 | orchestrator | Sunday 29 March 2026 04:20:32 +0000 (0:00:02.180) 0:00:43.059 **********
2026-03-29 04:20:34.248239 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-29 04:20:34.248246 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-29 04:20:34.248253 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-29 04:20:34.248260 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-29 04:20:34.248266 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-29 04:20:34.248277 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-29 04:20:36.721071 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-29 04:20:36.721157 | orchestrator |
2026-03-29 04:20:36.721165 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-03-29 04:20:36.721171 | orchestrator | Sunday 29 March 2026 04:20:34 +0000 (0:00:02.164) 0:00:45.223 **********
2026-03-29 04:20:36.721176 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:36.721183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:36.721197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:36.721201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:36.721205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:36.721209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:36.721213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:20:36.721228 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:20:36.721234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:36.721241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:36.721245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:36.721249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:36.721253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:36.721264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:39.140874 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:39.140949 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:39.140956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:39.140971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:39.140975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:39.140979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:39.140983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:20:39.140988 | orchestrator | 2026-03-29 04:20:39.140992 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-29 04:20:39.140997 | orchestrator | Sunday 29 March 2026 04:20:37 +0000 (0:00:03.352) 0:00:48.575 ********** 2026-03-29 04:20:39.141015 | orchestrator | changed: [testbed-manager] => { 2026-03-29 04:20:39.141020 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:20:39.141024 | orchestrator | } 2026-03-29 04:20:39.141028 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:20:39.141031 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:20:39.141035 | orchestrator | } 2026-03-29 04:20:39.141039 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:20:39.141043 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:20:39.141046 | orchestrator | } 2026-03-29 04:20:39.141050 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:20:39.141054 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:20:39.141058 | orchestrator | } 2026-03-29 04:20:39.141061 | orchestrator | changed: [testbed-node-3] => { 2026-03-29 04:20:39.141065 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 
04:20:39.141069 | orchestrator | } 2026-03-29 04:20:39.141072 | orchestrator | changed: [testbed-node-4] => { 2026-03-29 04:20:39.141076 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:20:39.141080 | orchestrator | } 2026-03-29 04:20:39.141084 | orchestrator | changed: [testbed-node-5] => { 2026-03-29 04:20:39.141087 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:20:39.141091 | orchestrator | } 2026-03-29 04:20:39.141095 | orchestrator | 2026-03-29 04:20:39.141099 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:20:39.141103 | orchestrator | Sunday 29 March 2026 04:20:38 +0000 (0:00:01.069) 0:00:49.644 ********** 2026-03-29 04:20:39.141118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:39.141124 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:39.141131 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:39.141135 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:20:39.141140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:39.141144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:39.141152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:39.141156 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:20:39.141160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:39.141168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801407 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 04:20:41.801440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:41.801466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801602 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:20:41.801629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:41.801651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801690 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-29 04:20:41.801711 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-29 04:20:41.801750 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:20:41.801797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:41.801813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801856 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:20:41.801870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:20:41.801883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:20:41.801912 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:20:41.801925 | orchestrator | 2026-03-29 04:20:41.801938 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:20:41.801951 | orchestrator | Sunday 29 March 2026 04:20:40 +0000 (0:00:02.282) 0:00:51.927 ********** 2026-03-29 04:20:41.801963 | orchestrator | 2026-03-29 04:20:41.801976 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:20:41.801989 | orchestrator | Sunday 29 March 2026 04:20:41 +0000 (0:00:00.084) 0:00:52.011 ********** 2026-03-29 04:20:41.802001 | orchestrator | 2026-03-29 
04:20:41.802106 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:20:41.802121 | orchestrator | Sunday 29 March 2026 04:20:41 +0000 (0:00:00.075) 0:00:52.087 ********** 2026-03-29 04:20:41.802132 | orchestrator | 2026-03-29 04:20:41.802143 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:20:41.802153 | orchestrator | Sunday 29 March 2026 04:20:41 +0000 (0:00:00.074) 0:00:52.162 ********** 2026-03-29 04:20:41.802162 | orchestrator | 2026-03-29 04:20:41.802172 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:20:41.802181 | orchestrator | Sunday 29 March 2026 04:20:41 +0000 (0:00:00.337) 0:00:52.500 ********** 2026-03-29 04:20:41.802191 | orchestrator | 2026-03-29 04:20:41.802200 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:20:41.802220 | orchestrator | Sunday 29 March 2026 04:20:41 +0000 (0:00:00.074) 0:00:52.574 ********** 2026-03-29 04:20:44.590280 | orchestrator | 2026-03-29 04:20:44.590367 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:20:44.590378 | orchestrator | Sunday 29 March 2026 04:20:41 +0000 (0:00:00.076) 0:00:52.651 ********** 2026-03-29 04:20:44.590386 | orchestrator | 2026-03-29 04:20:44.590393 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-29 04:20:44.590401 | orchestrator | Sunday 29 March 2026 04:20:41 +0000 (0:00:00.109) 0:00:52.760 ********** 2026-03-29 04:20:44.590408 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-03-29 04:20:44.590417 | orchestrator | (): '535e478a-7128-f4ec-09bd-00000000000f' 2026-03-29 04:20:44.590472 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_79ytb_st/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_79ytb_st/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_79ytb_st/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-29 04:20:44.590498 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5zk98xsl/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5zk98xsl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5zk98xsl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-29 04:20:44.590516 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "(traceback identical to testbed-node-0 above: docker.errors.APIError: 500 Server Error, unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found)"}
2026-03-29 04:20:44.590534 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "(traceback identical to testbed-node-0 above)"}
2026-03-29 04:20:46.141073 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "(traceback identical to testbed-node-0 above)"}
2026-03-29 04:20:46.141184 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "(traceback identical to testbed-node-0 above)"}
2026-03-29 04:20:46.141238 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "(traceback identical to testbed-node-0 above)"}
2026-03-29 04:20:46.141256 | orchestrator |
2026-03-29 04:20:46.141286 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 04:20:46.141303 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-29 04:20:46.141318 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-29 04:20:46.141332 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-29 04:20:46.141344 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-29 04:20:46.141364 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-29 04:20:46.141378 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-29 04:20:46.141390 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-29 04:20:46.141404 | orchestrator |
2026-03-29 04:20:46.141419 | orchestrator |
2026-03-29 04:20:46.141475 | orchestrator | TASKS RECAP ********************************************************************
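One detail visible in the failure above: the failing pull requests `registry.osism.tech/kolla/release/fluentd:5.0.8.20251208` (no release segment in the path), while the container definitions printed later in this log reference `registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208`. A minimal sketch that URL-decodes the `fromImage` parameter from the error URL makes the mismatch visible (an editorial diagnostic, not part of the job output):

```python
from urllib.parse import unquote

# fromImage value copied from the failing pull URL in the traceback above
pulled = unquote("registry.osism.tech%2Fkolla%2Frelease%2Ffluentd")

# image path used by the fluentd container definition later in this log
configured = "registry.osism.tech/kolla/release/2025.1/fluentd"

print(pulled)      # registry.osism.tech/kolla/release/fluentd
print(configured)  # registry.osism.tech/kolla/release/2025.1/fluentd

# The pulled reference is missing the 2025.1 release segment, which would
# explain the registry answering "artifact ... not found"
print(pulled == configured)  # False
```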
2026-03-29 04:20:46.709716 | orchestrator | 2026-03-29 04:20:46 | INFO  | Task fa3adb38-5bcb-4f60-88e4-2dab3a7155ca (common) was prepared for execution.
2026-03-29 04:20:46.709819 | orchestrator | 2026-03-29 04:20:46 | INFO  | It takes a moment until task fa3adb38-5bcb-4f60-88e4-2dab3a7155ca (common) has been started and output is visible here.
2026-03-29 04:21:05.336004 | orchestrator | Sunday 29 March 2026 04:20:46 +0000 (0:00:04.348) 0:00:57.109 **********
2026-03-29 04:21:05.336076 | orchestrator | ===============================================================================
2026-03-29 04:21:05.336082 | orchestrator | common : Restart fluentd container -------------------------------------- 4.35s
2026-03-29 04:21:05.336087 | orchestrator | common : Copying over config.json files for services -------------------- 4.21s
2026-03-29 04:21:05.336092 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.06s
2026-03-29 04:21:05.336096 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.36s
2026-03-29 04:21:05.336099 | orchestrator | service-check-containers : common | Check containers -------------------- 3.35s
2026-03-29 04:21:05.336103 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.10s
2026-03-29 04:21:05.336107 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.72s
2026-03-29 04:21:05.336111 | orchestrator | common : include_tasks -------------------------------------------------- 2.40s
2026-03-29 04:21:05.336115 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.28s
2026-03-29 04:21:05.336119 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.27s
2026-03-29 04:21:05.336122 | orchestrator | common : include_tasks -------------------------------------------------- 2.20s
2026-03-29 04:21:05.336126 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.18s
2026-03-29 04:21:05.336130 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.16s
2026-03-29 04:21:05.336134 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.10s
2026-03-29 04:21:05.336137 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.09s
2026-03-29 04:21:05.336141 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.89s
2026-03-29 04:21:05.336145 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.77s
2026-03-29 04:21:05.336164 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.25s
2026-03-29 04:21:05.336168 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 1.08s
2026-03-29 04:21:05.336172 | orchestrator | service-check-containers : common | Notify handlers to restart containers --- 1.07s
2026-03-29 04:21:05.336176 | orchestrator |
2026-03-29 04:21:05.336180 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-29 04:21:05.336184 | orchestrator |
2026-03-29 04:21:05.336188 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-29 04:21:05.336191 | orchestrator | Sunday 29 March 2026 04:20:53 +0000 (0:00:02.150) 0:00:02.150 **********
2026-03-29 04:21:05.336195 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 04:21:05.336200 | orchestrator |
2026-03-29 04:21:05.336204 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-29 04:21:05.336208 | orchestrator | Sunday 29 March 2026 04:20:56 +0000
(0:00:03.308) 0:00:05.458 **********
2026-03-29 04:21:05.336213 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 04:21:05.336217 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 04:21:05.336221 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 04:21:05.336225 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 04:21:05.336229 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 04:21:05.336233 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 04:21:05.336237 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 04:21:05.336240 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 04:21:05.336244 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 04:21:05.336248 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 04:21:05.336252 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 04:21:05.336265 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 04:21:05.336269 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 04:21:05.336273 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 04:21:05.336277 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 04:21:05.336281 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 04:21:05.336285 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'},
'kolla-toolbox'])
2026-03-29 04:21:05.336288 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 04:21:05.336292 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 04:21:05.336296 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 04:21:05.336308 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 04:21:05.336313 | orchestrator |
2026-03-29 04:21:05.336317 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-29 04:21:05.336320 | orchestrator | Sunday 29 March 2026 04:20:59 +0000 (0:00:03.392) 0:00:08.851 **********
2026-03-29 04:21:05.336324 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 04:21:05.336329 | orchestrator |
2026-03-29 04:21:05.336336 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-29 04:21:05.336340 | orchestrator | Sunday 29 March 2026 04:21:02 +0000 (0:00:02.888) 0:00:11.739 **********
2026-03-29 04:21:05.336345 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:21:05.336352 | orchestrator | ok: [testbed-node-0] => (item=fluentd, identical to the testbed-manager item above)
2026-03-29 04:21:05.336356 | orchestrator | ok: [testbed-node-1] => (item=fluentd, identical)
2026-03-29 04:21:05.336360 | orchestrator | ok: [testbed-node-2] => (item=fluentd, identical)
2026-03-29 04:21:05.336364 | orchestrator | ok: [testbed-node-3] => (item=fluentd, identical)
2026-03-29 04:21:05.336371 | orchestrator | ok: [testbed-node-4] => (item=fluentd, identical)
2026-03-29 04:21:05.336378 | orchestrator | ok: [testbed-node-5] => (item=fluentd, identical)
2026-03-29 04:21:07.777692 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:21:07.777783 | orchestrator | ok: [testbed-node-0] => (item=kolla-toolbox, identical to the testbed-manager item above)
2026-03-29 04:21:07.777796 | orchestrator | ok: [testbed-node-1] => (item=kolla-toolbox, identical)
2026-03-29 04:21:07.777805 | orchestrator | ok: [testbed-node-2] => (item=kolla-toolbox, identical)
2026-03-29 04:21:07.777830 | orchestrator | ok: [testbed-node-3] => (item=kolla-toolbox, identical)
2026-03-29 04:21:07.777840 | orchestrator | ok: [testbed-node-4] => (item=kolla-toolbox, identical)
2026-03-29 04:21:07.777864 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:21:07.777897 | orchestrator | ok: [testbed-node-5] => (item=kolla-toolbox, identical to the testbed-manager kolla-toolbox item above)
2026-03-29 04:21:07.777908 | orchestrator | ok: [testbed-node-0] => (item=cron, identical to the testbed-manager item above)
2026-03-29 04:21:07.777918 | orchestrator | ok: [testbed-node-1] => (item=cron, identical)
2026-03-29 04:21:07.777927 | orchestrator | ok: [testbed-node-2] => (item=cron, identical)
2026-03-29 04:21:07.777936 | orchestrator | ok: [testbed-node-3] => (item=cron, identical)
2026-03-29 04:21:07.777945 | orchestrator | ok: [testbed-node-4] => (item=cron, identical)
2026-03-29 04:21:07.777959 | orchestrator | ok: [testbed-node-5] => (item=cron, identical)
2026-03-29 04:21:07.777975 | orchestrator |
2026-03-29 04:21:07.777991 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-29 04:21:07.778005 | orchestrator | Sunday 29 March 2026 04:21:07 +0000 (0:00:04.444) 0:00:16.184 **********
2026-03-29 04:21:07.778091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 04:21:07.778124 | orchestrator | skipping: [testbed-node-0] => (item=fluentd, identical to the testbed-manager item above)
2026-03-29 04:21:10.032061 | orchestrator | skipping: [testbed-node-0] => (item=kolla-toolbox, identical to the item shown in the previous task)
2026-03-29 04:21:10.032170 | orchestrator | skipping: [testbed-manager] => (item=kolla-toolbox, identical)
2026-03-29 04:21:10.032187 | orchestrator | skipping: [testbed-node-1] => (item=fluentd, identical)
2026-03-29 04:21:10.032201 | orchestrator | skipping: [testbed-node-0] => (item=cron, identical)
2026-03-29 04:21:10.032214 | orchestrator | skipping: [testbed-manager] => (item=cron, identical)
2026-03-29 04:21:10.032226 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:21:10.032265 | orchestrator | skipping: [testbed-node-1] => (item=kolla-toolbox, identical)
2026-03-29 04:21:10.032276 | orchestrator | skipping: [testbed-node-2] => (item=fluentd, identical)
2026-03-29 04:21:10.032317 | orchestrator | skipping: [testbed-node-1] => (item=cron, identical)
2026-03-29 04:21:10.032325 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:21:10.032332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:10.032340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:10.032350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:10.032405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:10.032429 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 04:21:10.032440 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:21:10.032451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:10.032462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:10.032473 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:21:10.032492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:11.128106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128256 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:21:11.128266 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:21:11.128274 | orchestrator | 2026-03-29 04:21:11.128283 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-29 04:21:11.128292 | orchestrator | Sunday 29 March 2026 04:21:10 +0000 (0:00:02.790) 0:00:18.974 ********** 2026-03-29 04:21:11.128300 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:11.128310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:11.128333 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128361 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-03-29 04:21:11.128370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:11.128397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:11.128414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:11.128436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:24.793328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:24.793427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:24.793440 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:21:24.793450 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:21:24.793457 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:21:24.793465 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:21:24.793474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:24.793513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:24.793522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:24.793530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:24.793538 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:21:24.793591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:24.793600 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:21:24.793623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:21:24.793631 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:24.793647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:24.793655 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:21:24.793663 | orchestrator | 2026-03-29 04:21:24.793671 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-29 04:21:24.793689 | orchestrator | Sunday 29 March 2026 04:21:13 +0000 (0:00:02.998) 0:00:21.972 ********** 2026-03-29 04:21:24.793696 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:21:24.793704 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:21:24.793719 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:21:24.793727 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:21:24.793734 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:21:24.793741 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:21:24.793748 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:21:24.793755 
| orchestrator | 2026-03-29 04:21:24.793763 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-29 04:21:24.793771 | orchestrator | Sunday 29 March 2026 04:21:15 +0000 (0:00:02.088) 0:00:24.061 ********** 2026-03-29 04:21:24.793778 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:21:24.793790 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:21:24.793797 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:21:24.793804 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:21:24.793811 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:21:24.793818 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:21:24.793825 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:21:24.793832 | orchestrator | 2026-03-29 04:21:24.793840 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-29 04:21:24.793847 | orchestrator | Sunday 29 March 2026 04:21:17 +0000 (0:00:02.052) 0:00:26.114 ********** 2026-03-29 04:21:24.793854 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:21:24.793861 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:21:24.793868 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:21:24.793877 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:21:24.793885 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:21:24.793894 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:21:24.793902 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:21:24.793910 | orchestrator | 2026-03-29 04:21:24.793918 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-29 04:21:24.793927 | orchestrator | Sunday 29 March 2026 04:21:19 +0000 (0:00:02.015) 0:00:28.130 ********** 2026-03-29 04:21:24.793934 | orchestrator | ok: [testbed-manager] 2026-03-29 04:21:24.793944 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:21:24.793952 | orchestrator 
| ok: [testbed-node-1] 2026-03-29 04:21:24.793960 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:21:24.793968 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:21:24.793976 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:21:24.793984 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:21:24.793992 | orchestrator | 2026-03-29 04:21:24.794000 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-29 04:21:24.794008 | orchestrator | Sunday 29 March 2026 04:21:21 +0000 (0:00:02.798) 0:00:30.929 ********** 2026-03-29 04:21:24.794065 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:24.794091 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:27.483296 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:27.483433 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:27.483450 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483487 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:27.483499 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:27.483511 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:27.483582 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483617 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483630 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483643 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483656 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483669 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483700 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:27.483721 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:46.790303 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:46.790390 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:46.790412 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:46.790419 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:46.790424 | orchestrator | 2026-03-29 04:21:46.790430 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-29 04:21:46.790436 | orchestrator | Sunday 29 March 2026 04:21:27 +0000 (0:00:05.501) 0:00:36.431 ********** 2026-03-29 04:21:46.790440 | orchestrator | [WARNING]: Skipped 2026-03-29 04:21:46.790446 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-29 04:21:46.790452 | orchestrator | to this access issue: 2026-03-29 04:21:46.790456 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-29 04:21:46.790461 | orchestrator | directory 2026-03-29 04:21:46.790465 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 04:21:46.790470 | orchestrator | 2026-03-29 04:21:46.790474 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-29 04:21:46.790492 | orchestrator | Sunday 29 March 2026 04:21:29 +0000 (0:00:02.393) 0:00:38.824 ********** 2026-03-29 04:21:46.790496 | orchestrator | [WARNING]: Skipped 2026-03-29 04:21:46.790500 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-29 04:21:46.790504 | orchestrator | to this access issue: 2026-03-29 04:21:46.790508 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-29 04:21:46.790512 | orchestrator | directory 2026-03-29 04:21:46.790517 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 04:21:46.790521 | orchestrator | 2026-03-29 04:21:46.790525 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-29 04:21:46.790529 | orchestrator | Sunday 29 March 2026 04:21:31 +0000 (0:00:01.902) 0:00:40.726 ********** 2026-03-29 04:21:46.790533 | orchestrator | [WARNING]: Skipped 2026-03-29 04:21:46.790537 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-29 04:21:46.790541 | orchestrator | to this access issue: 2026-03-29 04:21:46.790545 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-29 04:21:46.790569 | orchestrator | directory 2026-03-29 04:21:46.790573 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 04:21:46.790577 | orchestrator | 2026-03-29 04:21:46.790581 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-29 04:21:46.790585 | orchestrator | Sunday 29 March 2026 04:21:33 +0000 (0:00:01.879) 0:00:42.606 ********** 2026-03-29 04:21:46.790589 | orchestrator | [WARNING]: Skipped 2026-03-29 04:21:46.790594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-29 04:21:46.790598 | orchestrator | to this access issue: 2026-03-29 04:21:46.790602 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-29 04:21:46.790606 | orchestrator | directory 2026-03-29 04:21:46.790610 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 04:21:46.790614 | orchestrator | 2026-03-29 04:21:46.790618 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-29 04:21:46.790622 | 
orchestrator | Sunday 29 March 2026 04:21:35 +0000 (0:00:01.926) 0:00:44.532 ********** 2026-03-29 04:21:46.790626 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:21:46.790630 | orchestrator | ok: [testbed-manager] 2026-03-29 04:21:46.790635 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:21:46.790639 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:21:46.790643 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:21:46.790647 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:21:46.790651 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:21:46.790655 | orchestrator | 2026-03-29 04:21:46.790670 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-29 04:21:46.790674 | orchestrator | Sunday 29 March 2026 04:21:39 +0000 (0:00:03.973) 0:00:48.506 ********** 2026-03-29 04:21:46.790679 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 04:21:46.790684 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 04:21:46.790688 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 04:21:46.790692 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 04:21:46.790696 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 04:21:46.790700 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 04:21:46.790705 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 04:21:46.790709 | orchestrator | 2026-03-29 04:21:46.790713 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-29 04:21:46.790717 | 
orchestrator | Sunday 29 March 2026 04:21:42 +0000 (0:00:03.248) 0:00:51.754 ********** 2026-03-29 04:21:46.790725 | orchestrator | ok: [testbed-manager] 2026-03-29 04:21:46.790729 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:21:46.790734 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:21:46.790738 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:21:46.790742 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:21:46.790746 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:21:46.790750 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:21:46.790754 | orchestrator | 2026-03-29 04:21:46.790758 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-29 04:21:46.790762 | orchestrator | Sunday 29 March 2026 04:21:45 +0000 (0:00:02.968) 0:00:54.723 ********** 2026-03-29 04:21:46.790771 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:46.790777 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:46.790782 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:46.790787 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:46.790794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:48.953912 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:48.954105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:48.954139 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:48.954152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:48.954165 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:48.954179 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:48.954192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:48.954222 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:48.954244 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:48.954255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:48.954272 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:48.954284 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:48.954295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:21:48.954307 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:48.954318 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:48.954336 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:57.730133 | orchestrator | 2026-03-29 04:21:57.730211 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-29 04:21:57.730220 | orchestrator | Sunday 29 March 2026 04:21:48 +0000 (0:00:03.173) 0:00:57.896 ********** 2026-03-29 04:21:57.730225 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 04:21:57.730231 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 04:21:57.730236 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 04:21:57.730241 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 04:21:57.730245 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 04:21:57.730250 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 04:21:57.730255 | orchestrator | ok: 
[testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 04:21:57.730259 | orchestrator | 2026-03-29 04:21:57.730264 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-29 04:21:57.730268 | orchestrator | Sunday 29 March 2026 04:21:52 +0000 (0:00:03.220) 0:01:01.117 ********** 2026-03-29 04:21:57.730273 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 04:21:57.730278 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 04:21:57.730282 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 04:21:57.730298 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 04:21:57.730302 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 04:21:57.730307 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 04:21:57.730311 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 04:21:57.730316 | orchestrator | 2026-03-29 04:21:57.730321 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-29 04:21:57.730325 | orchestrator | Sunday 29 March 2026 04:21:55 +0000 (0:00:03.136) 0:01:04.253 ********** 2026-03-29 04:21:57.730331 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:57.730339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:57.730344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:57.730364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:57.730385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:57.730391 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:57.730399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:57.730404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 04:21:57.730409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:57.730414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:57.730423 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:21:57.730433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.417741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.417907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.417924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.417937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.417974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.417986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.417998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.418089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.418106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:22:02.418118 | orchestrator | 2026-03-29 04:22:02.418130 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-29 04:22:02.418143 | orchestrator | Sunday 29 March 2026 04:21:59 +0000 (0:00:04.512) 0:01:08.766 ********** 2026-03-29 04:22:02.418155 | orchestrator | changed: 
[testbed-manager] => { 2026-03-29 04:22:02.418167 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:22:02.418178 | orchestrator | } 2026-03-29 04:22:02.418188 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:22:02.418199 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:22:02.418210 | orchestrator | } 2026-03-29 04:22:02.418221 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:22:02.418231 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:22:02.418242 | orchestrator | } 2026-03-29 04:22:02.418253 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:22:02.418264 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:22:02.418275 | orchestrator | } 2026-03-29 04:22:02.418286 | orchestrator | changed: [testbed-node-3] => { 2026-03-29 04:22:02.418296 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:22:02.418307 | orchestrator | } 2026-03-29 04:22:02.418318 | orchestrator | changed: [testbed-node-4] => { 2026-03-29 04:22:02.418330 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:22:02.418346 | orchestrator | } 2026-03-29 04:22:02.418357 | orchestrator | changed: [testbed-node-5] => { 2026-03-29 04:22:02.418367 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:22:02.418378 | orchestrator | } 2026-03-29 04:22:02.418389 | orchestrator | 2026-03-29 04:22:02.418400 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:22:02.418420 | orchestrator | Sunday 29 March 2026 04:22:01 +0000 (0:00:02.183) 0:01:10.949 ********** 2026-03-29 04:22:02.418433 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:22:02.418445 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:02.418458 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:02.418470 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:22:02.418489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  
2026-03-29 04:22:02.418510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:22:09.042546 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:22:09.042600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042617 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:22:09.042624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:22:09.042632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042647 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:22:09.042671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:22:09.042683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042705 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:22:09.042712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:22:09.042720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:22:09.042734 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:22:09.042741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 04:22:09.042754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:23:36.725322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:23:36.725461 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:23:36.725479 | orchestrator | 2026-03-29 04:23:36.725492 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:23:36.725520 | orchestrator | Sunday 29 March 2026 04:22:05 +0000 (0:00:03.123) 0:01:14.073 ********** 2026-03-29 04:23:36.725531 | orchestrator | 2026-03-29 04:23:36.725548 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:23:36.725608 | orchestrator | Sunday 29 March 2026 04:22:05 +0000 (0:00:00.452) 0:01:14.526 ********** 2026-03-29 04:23:36.725627 | orchestrator | 2026-03-29 04:23:36.725643 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:23:36.725661 | orchestrator | Sunday 29 March 2026 04:22:05 +0000 (0:00:00.435) 0:01:14.961 ********** 2026-03-29 04:23:36.725679 | orchestrator | 2026-03-29 04:23:36.725696 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:23:36.725714 | orchestrator | Sunday 29 March 2026 04:22:06 +0000 (0:00:00.493) 0:01:15.454 ********** 2026-03-29 04:23:36.725733 | orchestrator | 2026-03-29 04:23:36.725752 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:23:36.725771 | orchestrator | Sunday 29 March 2026 04:22:07 +0000 (0:00:00.740) 0:01:16.195 ********** 2026-03-29 04:23:36.725788 | orchestrator | 2026-03-29 04:23:36.725808 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:23:36.725826 | orchestrator | Sunday 29 March 2026 04:22:07 +0000 (0:00:00.470) 0:01:16.665 ********** 2026-03-29 04:23:36.725844 | 
orchestrator | 2026-03-29 04:23:36.725863 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 04:23:36.725882 | orchestrator | Sunday 29 March 2026 04:22:08 +0000 (0:00:00.460) 0:01:17.126 ********** 2026-03-29 04:23:36.725902 | orchestrator | 2026-03-29 04:23:36.725921 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-29 04:23:36.725941 | orchestrator | Sunday 29 March 2026 04:22:09 +0000 (0:00:00.858) 0:01:17.985 ********** 2026-03-29 04:23:36.725956 | orchestrator | changed: [testbed-manager] 2026-03-29 04:23:36.725969 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:23:36.725981 | orchestrator | changed: [testbed-node-3] 2026-03-29 04:23:36.725994 | orchestrator | changed: [testbed-node-4] 2026-03-29 04:23:36.726007 | orchestrator | changed: [testbed-node-5] 2026-03-29 04:23:36.726080 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:23:36.726093 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:23:36.726106 | orchestrator | 2026-03-29 04:23:36.726119 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-29 04:23:36.726142 | orchestrator | Sunday 29 March 2026 04:22:45 +0000 (0:00:36.048) 0:01:54.033 ********** 2026-03-29 04:23:36.726155 | orchestrator | changed: [testbed-manager] 2026-03-29 04:23:36.726169 | orchestrator | changed: [testbed-node-4] 2026-03-29 04:23:36.726181 | orchestrator | changed: [testbed-node-5] 2026-03-29 04:23:36.726192 | orchestrator | changed: [testbed-node-3] 2026-03-29 04:23:36.726203 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:23:36.726213 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:23:36.726224 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:23:36.726235 | orchestrator | 2026-03-29 04:23:36.726246 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-29 
04:23:36.726256 | orchestrator | Sunday 29 March 2026 04:23:20 +0000 (0:00:35.808) 0:02:29.842 ********** 2026-03-29 04:23:36.726267 | orchestrator | ok: [testbed-manager] 2026-03-29 04:23:36.726279 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:23:36.726290 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:23:36.726301 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:23:36.726311 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:23:36.726322 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:23:36.726347 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:23:36.726358 | orchestrator | 2026-03-29 04:23:36.726369 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-29 04:23:36.726380 | orchestrator | Sunday 29 March 2026 04:23:23 +0000 (0:00:03.072) 0:02:32.915 ********** 2026-03-29 04:23:36.726390 | orchestrator | changed: [testbed-manager] 2026-03-29 04:23:36.726401 | orchestrator | changed: [testbed-node-3] 2026-03-29 04:23:36.726412 | orchestrator | changed: [testbed-node-5] 2026-03-29 04:23:36.726423 | orchestrator | changed: [testbed-node-4] 2026-03-29 04:23:36.726433 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:23:36.726445 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:23:36.726456 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:23:36.726466 | orchestrator | 2026-03-29 04:23:36.726477 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:23:36.726490 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 04:23:36.726503 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 04:23:36.726514 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 04:23:36.726524 | orchestrator | testbed-node-2 : ok=18  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 04:23:36.726585 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 04:23:36.726608 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 04:23:36.726626 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 04:23:36.726644 | orchestrator | 2026-03-29 04:23:36.726662 | orchestrator | 2026-03-29 04:23:36.726679 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:23:36.726697 | orchestrator | Sunday 29 March 2026 04:23:36 +0000 (0:00:12.252) 0:02:45.167 ********** 2026-03-29 04:23:36.726723 | orchestrator | =============================================================================== 2026-03-29 04:23:36.726740 | orchestrator | common : Restart fluentd container ------------------------------------- 36.05s 2026-03-29 04:23:36.726758 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.81s 2026-03-29 04:23:36.726775 | orchestrator | common : Restart cron container ---------------------------------------- 12.25s 2026-03-29 04:23:36.726794 | orchestrator | common : Copying over config.json files for services -------------------- 5.50s 2026-03-29 04:23:36.726813 | orchestrator | service-check-containers : common | Check containers -------------------- 4.51s 2026-03-29 04:23:36.726833 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.44s 2026-03-29 04:23:36.726851 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.97s 2026-03-29 04:23:36.726868 | orchestrator | common : Flush handlers ------------------------------------------------- 3.91s 2026-03-29 04:23:36.726885 | orchestrator | common : Ensuring config directories exist 
------------------------------ 3.39s 2026-03-29 04:23:36.726896 | orchestrator | common : include_tasks -------------------------------------------------- 3.31s 2026-03-29 04:23:36.726907 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.25s 2026-03-29 04:23:36.726918 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.22s 2026-03-29 04:23:36.726928 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.17s 2026-03-29 04:23:36.726939 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.14s 2026-03-29 04:23:36.726960 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.12s 2026-03-29 04:23:36.726971 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.07s 2026-03-29 04:23:36.726982 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.00s 2026-03-29 04:23:36.726993 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.97s 2026-03-29 04:23:36.727004 | orchestrator | common : include_tasks -------------------------------------------------- 2.89s 2026-03-29 04:23:36.727014 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.80s 2026-03-29 04:23:37.039983 | orchestrator | + osism apply -a upgrade loadbalancer 2026-03-29 04:23:39.270966 | orchestrator | 2026-03-29 04:23:39 | INFO  | Task d6215a89-6507-4475-b8f4-196ce49bd8b9 (loadbalancer) was prepared for execution. 2026-03-29 04:23:39.271075 | orchestrator | 2026-03-29 04:23:39 | INFO  | It takes a moment until task d6215a89-6507-4475-b8f4-196ce49bd8b9 (loadbalancer) has been started and output is visible here. 
2026-03-29 04:24:14.998752 | orchestrator | 2026-03-29 04:24:14.998863 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 04:24:14.998878 | orchestrator | 2026-03-29 04:24:14.998889 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 04:24:14.998899 | orchestrator | Sunday 29 March 2026 04:23:46 +0000 (0:00:01.678) 0:00:01.678 ********** 2026-03-29 04:24:14.998909 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:24:14.998920 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:24:14.998930 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:24:14.998940 | orchestrator | 2026-03-29 04:24:14.998950 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 04:24:14.998960 | orchestrator | Sunday 29 March 2026 04:23:47 +0000 (0:00:01.760) 0:00:03.439 ********** 2026-03-29 04:24:14.998970 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-29 04:24:14.998980 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-29 04:24:14.998990 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-29 04:24:14.999000 | orchestrator | 2026-03-29 04:24:14.999010 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-29 04:24:14.999020 | orchestrator | 2026-03-29 04:24:14.999029 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-29 04:24:14.999039 | orchestrator | Sunday 29 March 2026 04:23:49 +0000 (0:00:01.892) 0:00:05.331 ********** 2026-03-29 04:24:14.999049 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:24:14.999059 | orchestrator | 2026-03-29 04:24:14.999069 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-03-29 04:24:14.999079 | orchestrator | Sunday 29 March 2026 04:23:52 +0000 (0:00:02.970) 0:00:08.302 ********** 2026-03-29 04:24:14.999089 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:24:14.999098 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:24:14.999108 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:24:14.999118 | orchestrator | 2026-03-29 04:24:14.999127 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-03-29 04:24:14.999137 | orchestrator | Sunday 29 March 2026 04:23:54 +0000 (0:00:02.206) 0:00:10.508 ********** 2026-03-29 04:24:14.999147 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:24:14.999157 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:24:14.999167 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:24:14.999176 | orchestrator | 2026-03-29 04:24:14.999186 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-29 04:24:14.999196 | orchestrator | Sunday 29 March 2026 04:23:57 +0000 (0:00:02.240) 0:00:12.749 ********** 2026-03-29 04:24:14.999205 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:24:14.999215 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:24:14.999249 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:24:14.999260 | orchestrator | 2026-03-29 04:24:14.999269 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-29 04:24:14.999279 | orchestrator | Sunday 29 March 2026 04:23:58 +0000 (0:00:01.812) 0:00:14.562 ********** 2026-03-29 04:24:14.999302 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:24:14.999329 | orchestrator | 2026-03-29 04:24:14.999351 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-29 04:24:14.999363 | orchestrator | Sunday 29 March 2026 04:24:01 +0000 (0:00:02.076) 0:00:16.639 ********** 2026-03-29 
04:24:14.999375 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:24:14.999387 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:24:14.999398 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:24:14.999409 | orchestrator | 2026-03-29 04:24:14.999420 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-29 04:24:14.999431 | orchestrator | Sunday 29 March 2026 04:24:02 +0000 (0:00:01.709) 0:00:18.348 ********** 2026-03-29 04:24:14.999443 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-29 04:24:14.999454 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-29 04:24:14.999465 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-29 04:24:14.999475 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-29 04:24:14.999484 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-29 04:24:14.999494 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-29 04:24:14.999503 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-29 04:24:14.999514 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-29 04:24:14.999524 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-29 04:24:14.999533 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-29 04:24:14.999543 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-29 04:24:14.999552 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
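The sysctl task above applies per-node kernel settings, and the `KOLLA_UNSET` value on `net.ipv4.tcp_retries2` marks a key to leave untouched. A hedged Python sketch of that filtering logic (the function name and the skip-marker handling are assumptions for illustration, not kolla-ansible's actual implementation):

```python
def sysctl_writes(items: list[dict]) -> dict[str, str]:
    """Map sysctl names to /proc/sys paths, skipping KOLLA_UNSET markers."""
    writes = {}
    for item in items:
        if item["value"] == "KOLLA_UNSET":  # assumed semantics: "do not set"
            continue
        # Dotted sysctl names map to slash-separated /proc/sys paths.
        path = "/proc/sys/" + item["name"].replace(".", "/")
        writes[path] = str(item["value"])
    return writes

items = [
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
assert sysctl_writes(items) == {
    "/proc/sys/net/ipv4/ip_nonlocal_bind": "1",
    "/proc/sys/net/unix/max_dgram_qlen": "128",
}
```
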
2026-03-29 04:24:14.999586 | orchestrator | 2026-03-29 04:24:14.999604 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-29 04:24:14.999614 | orchestrator | Sunday 29 March 2026 04:24:06 +0000 (0:00:03.391) 0:00:21.740 ********** 2026-03-29 04:24:14.999624 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-29 04:24:14.999634 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-29 04:24:14.999644 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-29 04:24:14.999653 | orchestrator | 2026-03-29 04:24:14.999663 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-29 04:24:14.999688 | orchestrator | Sunday 29 March 2026 04:24:08 +0000 (0:00:01.961) 0:00:23.701 ********** 2026-03-29 04:24:14.999698 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-29 04:24:14.999708 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-29 04:24:14.999718 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-29 04:24:14.999727 | orchestrator | 2026-03-29 04:24:14.999737 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-29 04:24:14.999747 | orchestrator | Sunday 29 March 2026 04:24:10 +0000 (0:00:02.274) 0:00:25.976 ********** 2026-03-29 04:24:14.999756 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-29 04:24:14.999766 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:24:14.999776 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-29 04:24:14.999785 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:24:14.999795 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-29 04:24:14.999813 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:24:14.999823 | orchestrator | 2026-03-29 04:24:14.999832 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-03-29 04:24:14.999842 | orchestrator | Sunday 29 March 2026 04:24:12 +0000 (0:00:01.882) 0:00:27.859 ********** 2026-03-29 04:24:14.999855 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:14.999875 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:14.999886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:14.999899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:14.999916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:14.999942 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:26.233261 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:24:26.234311 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:24:26.234363 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:24:26.234373 | orchestrator | 2026-03-29 04:24:26.234384 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-29 04:24:26.234393 | orchestrator | Sunday 29 March 2026 04:24:14 +0000 (0:00:02.720) 0:00:30.579 ********** 2026-03-29 04:24:26.234401 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:24:26.234411 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:24:26.234418 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:24:26.234426 | orchestrator | 2026-03-29 04:24:26.234434 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-29 04:24:26.234442 | orchestrator | Sunday 29 March 2026 04:24:17 +0000 (0:00:02.027) 0:00:32.607 ********** 2026-03-29 04:24:26.234450 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-03-29 04:24:26.234459 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-03-29 04:24:26.234467 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-03-29 04:24:26.234474 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-03-29 04:24:26.234482 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-03-29 04:24:26.234490 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-03-29 04:24:26.234498 | orchestrator | 2026-03-29 04:24:26.234506 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-29 04:24:26.234514 | orchestrator | Sunday 29 March 2026 04:24:19 +0000 (0:00:02.903) 0:00:35.510 ********** 2026-03-29 04:24:26.234522 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:24:26.234530 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:24:26.234537 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:24:26.234545 | orchestrator | 2026-03-29 04:24:26.234553 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-29 04:24:26.234582 | orchestrator | Sunday 29 March 2026 04:24:22 +0000 (0:00:02.288) 0:00:37.799 ********** 2026-03-29 04:24:26.234591 | orchestrator | ok: 
[testbed-node-0] 2026-03-29 04:24:26.234599 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:24:26.234606 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:24:26.234614 | orchestrator | 2026-03-29 04:24:26.234622 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-29 04:24:26.234650 | orchestrator | Sunday 29 March 2026 04:24:24 +0000 (0:00:02.297) 0:00:40.096 ********** 2026-03-29 04:24:26.234660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 04:24:26.234690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:24:26.234699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:24:26.234709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 04:24:26.234718 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:24:26.234732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 04:24:26.234741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:24:26.234762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:24:26.234770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 04:24:26.234778 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
04:24:26.234792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 04:24:30.598798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:24:30.598889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:24:30.598904 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 04:24:30.598940 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:24:30.598952 | orchestrator | 2026-03-29 04:24:30.598962 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-29 04:24:30.598972 | orchestrator | Sunday 29 March 2026 04:24:26 +0000 (0:00:01.718) 0:00:41.815 ********** 2026-03-29 04:24:30.598995 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:30.599005 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:30.599014 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:30.599039 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:30.599053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:24:30.599062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 04:24:30.599078 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:30.599087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:24:30.599096 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:30.599120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 04:24:44.568362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:24:44.568518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37', '__omit_place_holder__c8b040179e503f99afc03bb427787eddb511ed37'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 04:24:44.568647 | orchestrator | 2026-03-29 04:24:44.568684 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-29 04:24:44.568709 | orchestrator | Sunday 29 March 2026 04:24:30 +0000 (0:00:04.367) 0:00:46.183 ********** 2026-03-29 04:24:44.568731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:44.568755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:44.568777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 04:24:44.568798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:44.568853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:44.568879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:24:44.568916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:24:44.568939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:24:44.568961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:24:44.568982 | orchestrator | 2026-03-29 04:24:44.569003 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-29 04:24:44.569024 | orchestrator | Sunday 29 March 2026 04:24:35 +0000 (0:00:04.943) 0:00:51.127 ********** 2026-03-29 04:24:44.569045 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 04:24:44.569067 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 04:24:44.569087 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 
04:24:44.569109 | orchestrator | 2026-03-29 04:24:44.569129 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-29 04:24:44.569150 | orchestrator | Sunday 29 March 2026 04:24:38 +0000 (0:00:02.836) 0:00:53.963 ********** 2026-03-29 04:24:44.569169 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 04:24:44.569188 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 04:24:44.569208 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 04:24:44.569228 | orchestrator | 2026-03-29 04:24:44.569247 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-29 04:24:44.569266 | orchestrator | Sunday 29 March 2026 04:24:42 +0000 (0:00:04.303) 0:00:58.267 ********** 2026-03-29 04:24:44.569285 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:24:44.569306 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:24:44.569337 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:05.897077 | orchestrator | 2026-03-29 04:25:05.897172 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-29 04:25:05.897206 | orchestrator | Sunday 29 March 2026 04:24:44 +0000 (0:00:01.879) 0:01:00.146 ********** 2026-03-29 04:25:05.897214 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 04:25:05.897221 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 04:25:05.897239 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 04:25:05.897246 | 
orchestrator | 2026-03-29 04:25:05.897252 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-29 04:25:05.897258 | orchestrator | Sunday 29 March 2026 04:24:47 +0000 (0:00:03.141) 0:01:03.288 ********** 2026-03-29 04:25:05.897264 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 04:25:05.897273 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 04:25:05.897279 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 04:25:05.897285 | orchestrator | 2026-03-29 04:25:05.897291 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-29 04:25:05.897298 | orchestrator | Sunday 29 March 2026 04:24:50 +0000 (0:00:02.840) 0:01:06.129 ********** 2026-03-29 04:25:05.897305 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:25:05.897312 | orchestrator | 2026-03-29 04:25:05.897319 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-29 04:25:05.897326 | orchestrator | Sunday 29 March 2026 04:24:52 +0000 (0:00:01.938) 0:01:08.067 ********** 2026-03-29 04:25:05.897334 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-03-29 04:25:05.897341 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-03-29 04:25:05.897348 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-03-29 04:25:05.897355 | orchestrator | 2026-03-29 04:25:05.897362 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-29 04:25:05.897368 | orchestrator | Sunday 29 March 2026 04:24:55 +0000 (0:00:02.637) 0:01:10.705 ********** 2026-03-29 04:25:05.897375 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-29 04:25:05.897383 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-29 04:25:05.897390 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-29 04:25:05.897397 | orchestrator | 2026-03-29 04:25:05.897403 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-03-29 04:25:05.897410 | orchestrator | Sunday 29 March 2026 04:24:57 +0000 (0:00:02.682) 0:01:13.388 ********** 2026-03-29 04:25:05.897417 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:05.897425 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:05.897432 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:05.897438 | orchestrator | 2026-03-29 04:25:05.897445 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-29 04:25:05.897450 | orchestrator | Sunday 29 March 2026 04:24:59 +0000 (0:00:01.617) 0:01:15.005 ********** 2026-03-29 04:25:05.897456 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:05.897462 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:05.897468 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:05.897474 | orchestrator | 2026-03-29 04:25:05.897481 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-29 04:25:05.897487 | orchestrator | Sunday 29 March 2026 04:25:01 +0000 (0:00:01.992) 0:01:16.998 ********** 2026-03-29 04:25:05.897497 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 04:25:05.897513 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 04:25:05.897539 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 04:25:05.897546 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:25:05.897553 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:25:05.897581 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:25:05.897590 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:25:05.897603 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:25:05.897616 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:25:09.772956 | orchestrator | 2026-03-29 04:25:09.773048 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-29 04:25:09.773063 | orchestrator | Sunday 29 March 2026 04:25:05 +0000 (0:00:04.470) 0:01:21.468 ********** 2026-03-29 04:25:09.773106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 04:25:09.773139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:09.773159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:09.773179 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:09.773201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 04:25:09.773249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:09.773271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:09.773290 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:09.773333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 04:25:09.773363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:09.773384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:09.773403 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:09.773423 | orchestrator | 2026-03-29 04:25:09.773442 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-03-29 04:25:09.773463 | orchestrator | Sunday 29 March 2026 04:25:07 +0000 (0:00:01.735) 0:01:23.203 ********** 2026-03-29 04:25:09.773484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 04:25:09.773523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:09.773548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:09.773597 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:09.773633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 04:25:21.570444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:21.570670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:21.570698 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:21.570713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 04:25:21.570754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:21.570767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:21.570779 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:21.570791 | orchestrator | 2026-03-29 04:25:21.570804 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-29 04:25:21.570828 | orchestrator | Sunday 29 March 2026 04:25:09 +0000 (0:00:02.152) 0:01:25.356 ********** 2026-03-29 04:25:21.570856 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 04:25:21.570876 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 04:25:21.570895 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 04:25:21.570913 | orchestrator | 2026-03-29 04:25:21.570930 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-29 04:25:21.570949 | orchestrator | Sunday 29 March 2026 04:25:12 +0000 (0:00:02.585) 0:01:27.942 ********** 2026-03-29 04:25:21.570967 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 04:25:21.570988 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 04:25:21.571008 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 04:25:21.571029 | orchestrator | 2026-03-29 04:25:21.571073 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-29 04:25:21.571093 | orchestrator | Sunday 29 March 2026 04:25:14 +0000 (0:00:02.547) 0:01:30.489 ********** 2026-03-29 04:25:21.571119 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 04:25:21.571133 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 04:25:21.571145 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 04:25:21.571163 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 04:25:21.571189 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:21.571213 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 04:25:21.571232 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:21.571249 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 04:25:21.571281 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:21.571299 | orchestrator | 2026-03-29 04:25:21.571317 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-29 04:25:21.571334 | orchestrator | Sunday 29 March 2026 04:25:17 +0000 (0:00:02.587) 0:01:33.076 ********** 2026-03-29 04:25:21.571354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 04:25:21.571374 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 04:25:21.571392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 04:25:21.571411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-03-29 04:25:21.571446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:25:25.372271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:25:25.372391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:25:25.372403 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:25:25.372411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:25:25.372418 | orchestrator | 2026-03-29 04:25:25.372426 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-29 04:25:25.372434 | orchestrator | Sunday 29 March 2026 04:25:21 +0000 (0:00:04.074) 0:01:37.151 ********** 2026-03-29 04:25:25.372442 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:25:25.372449 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:25:25.372456 | orchestrator | } 2026-03-29 04:25:25.372462 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:25:25.372468 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:25:25.372475 | orchestrator | } 2026-03-29 04:25:25.372481 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:25:25.372487 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:25:25.372493 | orchestrator | } 2026-03-29 
04:25:25.372500 | orchestrator | 2026-03-29 04:25:25.372506 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:25:25.372512 | orchestrator | Sunday 29 March 2026 04:25:23 +0000 (0:00:01.505) 0:01:38.656 ********** 2026-03-29 04:25:25.372520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 04:25:25.372543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:25.372555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:25.372609 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:25.372618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 04:25:25.372625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:25.372632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:25.372638 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:25.372644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 04:25:25.372651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:25:25.372673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:25:31.352854 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:31.352945 | orchestrator | 2026-03-29 04:25:31.352956 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-29 04:25:31.352966 | orchestrator | Sunday 29 March 2026 04:25:25 +0000 (0:00:02.295) 0:01:40.952 ********** 2026-03-29 04:25:31.352974 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:25:31.352982 | orchestrator | 2026-03-29 04:25:31.352991 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-29 04:25:31.352999 | orchestrator | Sunday 29 March 2026 04:25:27 +0000 (0:00:02.103) 0:01:43.055 ********** 2026-03-29 04:25:31.353010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:25:31.353023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 04:25:31.353034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:31.353043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 04:25:31.353101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:25:31.353112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:25:31.353121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 04:25:31.353130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 04:25:31.353138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:31.353152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 04:25:31.353170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:33.107074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-03-29 04:25:33.107170 | orchestrator | 2026-03-29 04:25:33.107188 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-29 04:25:33.107202 | orchestrator | Sunday 29 March 2026 04:25:32 +0000 (0:00:04.984) 0:01:48.040 ********** 2026-03-29 04:25:33.107216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:25:33.107233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 
04:25:33.107268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:33.107294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 04:25:33.107307 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:33.107339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:25:33.107352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 04:25:33.107364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:33.107376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 04:25:33.107394 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:33.107406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:25:33.107423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 04:25:33.107443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:47.970448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 04:25:47.970658 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:47.970692 | orchestrator | 2026-03-29 04:25:47.970715 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-29 04:25:47.970737 | orchestrator | Sunday 29 March 2026 04:25:34 +0000 (0:00:01.731) 0:01:49.771 ********** 2026-03-29 04:25:47.970755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-03-29 04:25:47.970773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:25:47.970815 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:47.970827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:25:47.970839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:25:47.970850 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:47.970861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:25:47.970872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:25:47.970883 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:25:47.970894 | orchestrator | 2026-03-29 04:25:47.970906 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-29 04:25:47.970917 | orchestrator | Sunday 29 March 2026 04:25:36 +0000 (0:00:02.205) 0:01:51.977 ********** 2026-03-29 04:25:47.970928 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:25:47.970940 | 
orchestrator | ok: [testbed-node-1] 2026-03-29 04:25:47.970953 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:25:47.970966 | orchestrator | 2026-03-29 04:25:47.970978 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-29 04:25:47.970991 | orchestrator | Sunday 29 March 2026 04:25:38 +0000 (0:00:02.291) 0:01:54.269 ********** 2026-03-29 04:25:47.971004 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:25:47.971016 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:25:47.971029 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:25:47.971042 | orchestrator | 2026-03-29 04:25:47.971054 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-29 04:25:47.971081 | orchestrator | Sunday 29 March 2026 04:25:41 +0000 (0:00:02.956) 0:01:57.226 ********** 2026-03-29 04:25:47.971092 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:25:47.971103 | orchestrator | 2026-03-29 04:25:47.971114 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-29 04:25:47.971125 | orchestrator | Sunday 29 March 2026 04:25:43 +0000 (0:00:01.665) 0:01:58.891 ********** 2026-03-29 04:25:47.971161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:25:47.971177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:47.971198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:25:47.971211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:25:47.971228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:47.971248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:25:49.635607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:25:49.635777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:49.635843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:25:49.635857 | orchestrator | 2026-03-29 04:25:49.635870 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-29 04:25:49.635882 | orchestrator | Sunday 29 March 2026 04:25:47 +0000 (0:00:04.661) 0:02:03.553 ********** 2026-03-29 04:25:49.635911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:25:49.635925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:49.635957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:25:49.635978 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:25:49.635991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:25:49.636003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 04:25:49.636019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:25:49.636033 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:25:49.636047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:25:49.636077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
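The `haproxy` dicts looping through the tasks above (e.g. `barbican_api` with `enabled`, `mode`, `port`, `listen_port`, `tls_backend`, `backend_http_extra`) are the per-service frontend/backend definitions that kolla-ansible's haproxy-config role templates into HAProxy configuration. As a rough illustration only (this is a hypothetical minimal renderer, not kolla-ansible's actual Jinja template; the VIP `192.168.16.9` and the member list are made-up values, though the field names mirror the log):

```python
# Hypothetical sketch: turn one service dict (shaped like the
# 'barbican_api' entries in the log) into an HAProxy "listen" block.
# Not kolla-ansible's real template logic.

def render_listen(name, svc, vip, backends):
    """Render an HAProxy listen section for one service definition."""
    if svc.get("enabled") != "yes":
        return ""  # disabled services produce no config
    lines = [
        f"listen {name}",
        f"  mode {svc['mode']}",
        f"  bind {vip}:{svc['listen_port']}",
    ]
    # Extra backend options, e.g. 'option httpchk' as seen in the log.
    for extra in svc.get("backend_http_extra", []):
        lines.append(f"  {extra}")
    # One server line per backend host (check parameters illustrative).
    for host, addr in backends:
        lines.append(
            f"  server {host} {addr}:{svc['port']} check inter 2000 rise 2 fall 5"
        )
    return "\n".join(lines)


svc = {
    "enabled": "yes",
    "mode": "http",
    "external": False,
    "port": "9311",
    "listen_port": "9311",
    "tls_backend": "no",
    "backend_http_extra": ["option httpchk"],
}
print(render_listen("barbican_api", svc, "192.168.16.9",
                    [("testbed-node-0", "192.168.16.10")]))
```

Services with a `custom_member_list` (like the ceph-rgw entries further down) bypass per-host generation and supply their `server` lines verbatim instead.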
2026-03-29 04:26:07.356439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:26:07.356550 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:07.356632 | orchestrator | 2026-03-29 04:26:07.356656 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-29 04:26:07.356673 | orchestrator | Sunday 29 March 2026 04:25:49 +0000 (0:00:01.666) 0:02:05.220 ********** 2026-03-29 04:26:07.356685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:07.356700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:07.356713 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:07.356725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:07.356736 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:07.356747 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:07.356758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:07.356770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:07.356782 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:07.356793 | orchestrator | 2026-03-29 04:26:07.356902 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-29 04:26:07.356917 | orchestrator | Sunday 29 March 2026 04:25:51 +0000 (0:00:01.986) 0:02:07.206 ********** 2026-03-29 04:26:07.356929 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:26:07.356969 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:26:07.356983 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:26:07.356995 | orchestrator | 2026-03-29 04:26:07.357009 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-29 04:26:07.357022 | orchestrator | Sunday 29 March 2026 04:25:54 +0000 (0:00:02.447) 0:02:09.653 ********** 2026-03-29 04:26:07.357034 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:26:07.357047 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:26:07.357060 | orchestrator | ok: [testbed-node-1] 2026-03-29 
04:26:07.357073 | orchestrator | 2026-03-29 04:26:07.357086 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-29 04:26:07.357099 | orchestrator | Sunday 29 March 2026 04:25:57 +0000 (0:00:03.739) 0:02:13.393 ********** 2026-03-29 04:26:07.357112 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:07.357124 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:07.357137 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:07.357150 | orchestrator | 2026-03-29 04:26:07.357163 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-29 04:26:07.357175 | orchestrator | Sunday 29 March 2026 04:25:59 +0000 (0:00:01.459) 0:02:14.852 ********** 2026-03-29 04:26:07.357188 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:26:07.357201 | orchestrator | 2026-03-29 04:26:07.357214 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-29 04:26:07.357227 | orchestrator | Sunday 29 March 2026 04:26:00 +0000 (0:00:01.690) 0:02:16.542 ********** 2026-03-29 04:26:07.357279 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 04:26:07.357301 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 04:26:07.357316 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 04:26:07.357338 | orchestrator | 2026-03-29 04:26:07.357350 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-29 04:26:07.357367 | orchestrator | Sunday 29 
March 2026 04:26:04 +0000 (0:00:03.630) 0:02:20.173 ********** 2026-03-29 04:26:07.357379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 04:26:07.357391 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:07.357402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 04:26:07.357414 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:07.357434 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 04:26:20.315244 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:20.315328 | orchestrator | 2026-03-29 04:26:20.315338 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-29 04:26:20.315345 | orchestrator | Sunday 29 March 2026 04:26:07 +0000 (0:00:02.763) 0:02:22.937 ********** 2026-03-29 04:26:20.315353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 04:26:20.315362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 
5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 04:26:20.315390 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:20.315397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 04:26:20.315412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 04:26:20.315418 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:20.315424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 04:26:20.315430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 04:26:20.315436 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:20.315441 | orchestrator | 2026-03-29 04:26:20.315447 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-29 04:26:20.315452 | orchestrator | Sunday 29 March 2026 04:26:10 +0000 (0:00:03.214) 0:02:26.151 ********** 2026-03-29 04:26:20.315458 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:20.315463 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:20.315469 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:20.315474 | orchestrator | 2026-03-29 04:26:20.315480 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-29 04:26:20.315485 | orchestrator | Sunday 29 March 2026 04:26:12 +0000 (0:00:01.568) 0:02:27.720 ********** 2026-03-29 04:26:20.315490 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:20.315496 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:20.315502 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:20.315507 | orchestrator | 2026-03-29 04:26:20.315512 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-29 04:26:20.315518 | orchestrator | Sunday 29 March 2026 04:26:14 +0000 (0:00:02.505) 0:02:30.226 ********** 2026-03-29 04:26:20.315524 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:26:20.315529 | orchestrator | 2026-03-29 04:26:20.315535 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-29 04:26:20.315540 | orchestrator | Sunday 29 March 2026 04:26:16 +0000 (0:00:01.845) 0:02:32.072 ********** 2026-03-29 04:26:20.315560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:26:20.315610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:26:20.315621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 04:26:20.315629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 04:26:20.315636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:26:20.315648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.301911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.302111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.303008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:26:22.303073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.303083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.303133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.303141 | orchestrator | 2026-03-29 04:26:22.303147 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-29 04:26:22.303154 | orchestrator | Sunday 29 March 2026 04:26:21 +0000 
(0:00:04.950) 0:02:37.022 ********** 2026-03-29 04:26:22.303169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:26:22.303176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.303181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.303187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 04:26:22.303197 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:22.303227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:26:34.134291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:26:34.134389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 04:26:34.134400 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 04:26:34.134409 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:34.134419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:26:34.134463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:26:34.134492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 04:26:34.134501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 04:26:34.134509 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:34.134516 | orchestrator | 2026-03-29 04:26:34.134525 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-29 04:26:34.134532 | orchestrator | Sunday 29 March 2026 04:26:23 +0000 (0:00:01.996) 0:02:39.019 ********** 2026-03-29 04:26:34.134539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:34.134547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:34.134560 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:34.134607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:34.134613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:34.134618 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:34.134624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-03-29 04:26:34.134629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:34.134636 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:34.134642 | orchestrator | 2026-03-29 04:26:34.134648 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-29 04:26:34.134655 | orchestrator | Sunday 29 March 2026 04:26:25 +0000 (0:00:02.051) 0:02:41.071 ********** 2026-03-29 04:26:34.134662 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:26:34.134671 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:26:34.134678 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:26:34.134686 | orchestrator | 2026-03-29 04:26:34.134692 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-29 04:26:34.134699 | orchestrator | Sunday 29 March 2026 04:26:27 +0000 (0:00:02.343) 0:02:43.415 ********** 2026-03-29 04:26:34.134707 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:26:34.134714 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:26:34.134721 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:26:34.134728 | orchestrator | 2026-03-29 04:26:34.134734 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-29 04:26:34.134742 | orchestrator | Sunday 29 March 2026 04:26:30 +0000 (0:00:03.006) 0:02:46.422 ********** 2026-03-29 04:26:34.134749 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:34.134756 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:34.134763 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:34.134770 | orchestrator | 2026-03-29 04:26:34.134777 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-03-29 04:26:34.134784 | orchestrator | Sunday 29 March 2026 04:26:32 +0000 (0:00:01.651) 0:02:48.074 ********** 2026-03-29 04:26:34.134791 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:34.134798 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:34.134812 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:39.623092 | orchestrator | 2026-03-29 04:26:39.623217 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-29 04:26:39.623237 | orchestrator | Sunday 29 March 2026 04:26:34 +0000 (0:00:01.646) 0:02:49.720 ********** 2026-03-29 04:26:39.623266 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:26:39.623288 | orchestrator | 2026-03-29 04:26:39.623300 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-29 04:26:39.623312 | orchestrator | Sunday 29 March 2026 04:26:35 +0000 (0:00:01.815) 0:02:51.536 ********** 2026-03-29 04:26:39.623328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:26:39.623363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 04:26:39.623372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 04:26:39.623380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 04:26:39.623387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 04:26:39.623413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:26:39.623421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 04:26:39.623433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:26:39.623440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 04:26:39.623447 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 04:26:39.623453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 04:26:39.623467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 04:26:41.631783 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:26:41.631867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 04:26:41.631878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:26:41.631889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 04:26:41.631896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 04:26:41.631929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 04:26:41.631956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 04:26:41.631964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:26:41.631970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 04:26:41.631977 | orchestrator | 2026-03-29 04:26:41.631985 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-29 04:26:41.631992 | orchestrator | Sunday 29 March 2026 04:26:40 +0000 (0:00:05.015) 0:02:56.552 ********** 2026-03-29 04:26:41.631999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:26:41.632010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 04:26:41.632034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.940849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.940979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.940996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.941008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.941021 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:42.941826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:26:42.941901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 04:26:42.941917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.941929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.941940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.941952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.941968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 04:26:42.941987 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:42.942007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:26:58.099165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 04:26:58.099284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 04:26:58.099302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 04:26:58.099315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 04:26:58.099369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:26:58.099382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 04:26:58.099394 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:58.099408 | orchestrator | 2026-03-29 04:26:58.099420 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-29 04:26:58.099433 | orchestrator | Sunday 29 March 2026 04:26:42 +0000 (0:00:01.972) 0:02:58.525 ********** 2026-03-29 04:26:58.099461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:58.099477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:58.099490 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:58.099501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:58.099513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:58.099524 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:58.099536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:58.099548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:26:58.099559 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:58.099664 | orchestrator | 2026-03-29 04:26:58.099682 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-29 04:26:58.099695 | orchestrator | Sunday 29 March 2026 04:26:44 +0000 (0:00:02.025) 0:03:00.551 ********** 2026-03-29 04:26:58.099708 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:26:58.099733 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:26:58.099746 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:26:58.099758 | orchestrator | 2026-03-29 04:26:58.099771 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-29 04:26:58.099784 | orchestrator | Sunday 29 March 2026 04:26:47 +0000 (0:00:02.223) 0:03:02.775 ********** 2026-03-29 04:26:58.099797 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:26:58.099809 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:26:58.099821 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:26:58.099833 | orchestrator | 2026-03-29 04:26:58.099846 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-29 04:26:58.099858 | orchestrator | Sunday 29 March 2026 04:26:50 +0000 (0:00:02.963) 0:03:05.738 ********** 2026-03-29 04:26:58.099871 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:26:58.099884 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:26:58.099896 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:26:58.099909 | orchestrator | 2026-03-29 04:26:58.099921 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-03-29 04:26:58.099934 | orchestrator | Sunday 29 March 2026 04:26:51 +0000 (0:00:01.410) 0:03:07.150 ********** 2026-03-29 04:26:58.099946 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:26:58.099959 | orchestrator | 2026-03-29 04:26:58.099971 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-29 04:26:58.099984 | orchestrator | Sunday 29 March 2026 04:26:53 +0000 (0:00:01.914) 0:03:09.065 ********** 2026-03-29 04:26:58.100020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 04:26:59.235221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 04:26:59.236739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 04:26:59.236845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 04:26:59.236882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 
04:26:59.236932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 
04:27:02.587849 | orchestrator | 2026-03-29 04:27:02.587944 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-29 04:27:02.587959 | orchestrator | Sunday 29 March 2026 04:26:59 +0000 (0:00:05.758) 0:03:14.824 ********** 2026-03-29 04:27:02.587992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 04:27:02.588008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 04:27:02.588079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 04:27:02.588103 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:02.588124 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 04:27:02.588154 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 04:27:02.588189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 04:27:20.909224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 04:27:20.909362 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:20.909381 | orchestrator | 2026-03-29 04:27:20.909394 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-03-29 04:27:20.909406 | orchestrator | Sunday 29 March 2026 04:27:03 +0000 (0:00:04.429) 0:03:19.253 ********** 2026-03-29 04:27:20.909419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 04:27:20.909432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 04:27:20.909444 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:20.909457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 04:27:20.909501 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 04:27:20.909515 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:20.909526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 04:27:20.909538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 04:27:20.909549 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:20.909559 | orchestrator | 2026-03-29 04:27:20.909678 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-03-29 04:27:20.909697 | orchestrator | Sunday 29 March 2026 04:27:08 +0000 (0:00:04.537) 0:03:23.791 ********** 2026-03-29 04:27:20.909708 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:27:20.909720 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:27:20.909731 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:27:20.909742 | orchestrator | 2026-03-29 04:27:20.909755 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-29 04:27:20.909768 | orchestrator | Sunday 29 March 2026 04:27:10 +0000 (0:00:02.283) 0:03:26.075 ********** 2026-03-29 04:27:20.909781 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:27:20.909794 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:27:20.909806 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:27:20.909819 | orchestrator | 2026-03-29 04:27:20.909832 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-29 04:27:20.909844 | orchestrator | Sunday 29 March 2026 04:27:13 +0000 (0:00:02.773) 0:03:28.848 ********** 2026-03-29 04:27:20.909856 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:20.909868 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:20.909881 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:20.909893 | orchestrator | 2026-03-29 04:27:20.909905 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-29 04:27:20.909917 | orchestrator | Sunday 29 March 2026 04:27:14 +0000 (0:00:01.371) 0:03:30.219 ********** 2026-03-29 04:27:20.909929 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:27:20.909941 | orchestrator | 2026-03-29 04:27:20.909954 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-29 04:27:20.909967 | orchestrator | Sunday 29 March 2026 04:27:16 +0000 (0:00:01.672) 0:03:31.892 ********** 2026-03-29 
04:27:20.909981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:27:20.910013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:27:36.880018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:27:36.880159 | orchestrator | 2026-03-29 04:27:36.880177 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-29 04:27:36.880189 | orchestrator | Sunday 29 March 2026 04:27:20 +0000 (0:00:04.603) 0:03:36.495 ********** 2026-03-29 04:27:36.880202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:27:36.880214 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:36.880227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:27:36.880238 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:36.880250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:27:36.880261 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:36.880272 | orchestrator | 2026-03-29 04:27:36.880283 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-29 04:27:36.880294 | orchestrator | Sunday 29 March 2026 04:27:22 +0000 (0:00:01.609) 0:03:38.104 ********** 2026-03-29 04:27:36.880306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:27:36.880335 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:27:36.880365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:27:36.880386 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:36.880398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:27:36.880409 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:36.880419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:27:36.880431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:27:36.880442 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:36.880452 | orchestrator | 2026-03-29 04:27:36.880463 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-29 04:27:36.880474 | orchestrator | Sunday 29 March 2026 04:27:23 +0000 (0:00:01.466) 0:03:39.571 ********** 2026-03-29 04:27:36.880484 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:27:36.880496 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:27:36.880507 | 
orchestrator | ok: [testbed-node-2] 2026-03-29 04:27:36.880517 | orchestrator | 2026-03-29 04:27:36.880530 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-29 04:27:36.880542 | orchestrator | Sunday 29 March 2026 04:27:26 +0000 (0:00:02.276) 0:03:41.847 ********** 2026-03-29 04:27:36.880555 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:27:36.880568 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:27:36.880580 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:27:36.880630 | orchestrator | 2026-03-29 04:27:36.880644 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-29 04:27:36.880656 | orchestrator | Sunday 29 March 2026 04:27:29 +0000 (0:00:02.781) 0:03:44.629 ********** 2026-03-29 04:27:36.880668 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:36.880681 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:36.880693 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:36.880706 | orchestrator | 2026-03-29 04:27:36.880719 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-29 04:27:36.880731 | orchestrator | Sunday 29 March 2026 04:27:30 +0000 (0:00:01.410) 0:03:46.039 ********** 2026-03-29 04:27:36.880744 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:27:36.880756 | orchestrator | 2026-03-29 04:27:36.880767 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-29 04:27:36.880778 | orchestrator | Sunday 29 March 2026 04:27:32 +0000 (0:00:01.802) 0:03:47.841 ********** 2026-03-29 04:27:36.880818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 
04:27:38.557058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 04:27:38.557250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 04:27:38.557316 | orchestrator | 2026-03-29 04:27:38.557337 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-29 04:27:38.557350 | orchestrator | Sunday 29 March 2026 04:27:36 +0000 (0:00:04.620) 0:03:52.462 ********** 2026-03-29 04:27:38.557364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 04:27:38.557384 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:38.557498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 04:27:47.099795 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:47.099911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 04:27:47.099960 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:47.099972 | orchestrator | 2026-03-29 04:27:47.099984 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-29 04:27:47.099996 | orchestrator | Sunday 29 March 2026 04:27:38 +0000 (0:00:01.682) 0:03:54.144 ********** 2026-03-29 04:27:47.100023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-29 04:27:47.100039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 04:27:47.100074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-29 04:27:47.100108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 04:27:47.100127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 04:27:47.100147 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
04:27:47.100187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-29 04:27:47.100207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 04:27:47.100226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-29 04:27:47.100245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 04:27:47.100277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 04:27:47.100296 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:47.100317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-29 04:27:47.100332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 04:27:47.100346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-29 04:27:47.100366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 04:27:47.100380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 04:27:47.100392 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:47.100406 | orchestrator | 2026-03-29 04:27:47.100418 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-29 04:27:47.100431 | orchestrator | Sunday 29 March 2026 04:27:40 +0000 (0:00:01.930) 0:03:56.075 ********** 2026-03-29 04:27:47.100445 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:27:47.100458 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:27:47.100471 | orchestrator 
| ok: [testbed-node-2] 2026-03-29 04:27:47.100483 | orchestrator | 2026-03-29 04:27:47.100496 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-29 04:27:47.100509 | orchestrator | Sunday 29 March 2026 04:27:42 +0000 (0:00:02.221) 0:03:58.296 ********** 2026-03-29 04:27:47.100522 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:27:47.100534 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:27:47.100546 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:27:47.100558 | orchestrator | 2026-03-29 04:27:47.100571 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-29 04:27:47.100584 | orchestrator | Sunday 29 March 2026 04:27:45 +0000 (0:00:02.864) 0:04:01.161 ********** 2026-03-29 04:27:47.100646 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:47.100659 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:47.100672 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:47.100683 | orchestrator | 2026-03-29 04:27:47.100696 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-29 04:27:47.100708 | orchestrator | Sunday 29 March 2026 04:27:46 +0000 (0:00:01.321) 0:04:02.482 ********** 2026-03-29 04:27:47.100728 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:56.670371 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:56.670478 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:56.670492 | orchestrator | 2026-03-29 04:27:56.670502 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-29 04:27:56.670513 | orchestrator | Sunday 29 March 2026 04:27:48 +0000 (0:00:01.298) 0:04:03.780 ********** 2026-03-29 04:27:56.670540 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:27:56.670545 | orchestrator | 2026-03-29 04:27:56.670550 | orchestrator | TASK 
[haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-29 04:27:56.670555 | orchestrator | Sunday 29 March 2026 04:27:50 +0000 (0:00:01.910) 0:04:05.690 ********** 2026-03-29 04:27:56.670565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-29 04:27:56.670573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 
04:27:56.670589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 04:27:56.670653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-29 04:27:56.670673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 04:27:56.670685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 04:27:56.670690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-29 04:27:56.670699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 04:27:56.670704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 04:27:56.670709 | orchestrator | 2026-03-29 04:27:56.670714 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-29 04:27:56.670720 | orchestrator | Sunday 29 March 2026 04:27:54 +0000 (0:00:04.576) 0:04:10.267 ********** 2026-03-29 04:27:56.670729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-29 04:27:58.350277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 04:27:58.350386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 04:27:58.350404 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:58.350439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-29 04:27:58.350454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 04:27:58.350489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 04:27:58.350500 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:58.350532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-29 04:27:58.350545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 04:27:58.350563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 04:27:58.350574 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:58.350586 | orchestrator | 2026-03-29 04:27:58.350630 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-29 04:27:58.350643 | orchestrator | Sunday 29 March 2026 04:27:56 +0000 (0:00:01.987) 0:04:12.254 ********** 2026-03-29 04:27:58.350656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-29 04:27:58.350670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-29 04:27:58.350693 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:27:58.350704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-29 04:27:58.350716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-29 04:27:58.350727 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:27:58.350738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-29 04:27:58.350750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-29 04:27:58.350761 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:27:58.350772 | orchestrator | 
2026-03-29 04:27:58.350783 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-29 04:27:58.350802 | orchestrator | Sunday 29 March 2026 04:27:58 +0000 (0:00:01.679) 0:04:13.934 ********** 2026-03-29 04:28:13.296738 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:28:13.296854 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:28:13.296867 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:28:13.296878 | orchestrator | 2026-03-29 04:28:13.296890 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-29 04:28:13.296901 | orchestrator | Sunday 29 March 2026 04:28:00 +0000 (0:00:02.251) 0:04:16.185 ********** 2026-03-29 04:28:13.296911 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:28:13.296921 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:28:13.296931 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:28:13.296941 | orchestrator | 2026-03-29 04:28:13.296952 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-29 04:28:13.296963 | orchestrator | Sunday 29 March 2026 04:28:03 +0000 (0:00:03.096) 0:04:19.282 ********** 2026-03-29 04:28:13.296973 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:28:13.296985 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:28:13.296996 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:28:13.297006 | orchestrator | 2026-03-29 04:28:13.297017 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-29 04:28:13.297030 | orchestrator | Sunday 29 March 2026 04:28:05 +0000 (0:00:01.327) 0:04:20.609 ********** 2026-03-29 04:28:13.297055 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:28:13.297076 | orchestrator | 2026-03-29 04:28:13.297088 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-29 
04:28:13.297100 | orchestrator | Sunday 29 March 2026 04:28:06 +0000 (0:00:01.751) 0:04:22.361 **********
2026-03-29 04:28:13.297133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:13.297175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 04:28:13.297188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:13.297217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 04:28:13.297228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:13.297252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 04:28:13.297265 | orchestrator |
2026-03-29 04:28:13.297277 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-03-29 04:28:13.297290 | orchestrator | Sunday 29 March 2026 04:28:11 +0000 (0:00:04.865) 0:04:27.226 **********
2026-03-29 04:28:13.297303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:13.297321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 04:28:25.890147 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:28:25.890228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:25.890251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 04:28:25.890290 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:28:25.890305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:25.890313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 04:28:25.890320 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:28:25.890326 | orchestrator |
2026-03-29 04:28:25.890334 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-03-29 04:28:25.890340 | orchestrator | Sunday 29 March 2026 04:28:13 +0000 (0:00:01.654) 0:04:28.881 **********
2026-03-29 04:28:25.890359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:25.890367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:25.890372 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:28:25.890376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:25.890380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:25.890389 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:28:25.890393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:25.890397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:25.890400 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:28:25.890404 | orchestrator |
2026-03-29 04:28:25.890408 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-03-29 04:28:25.890412 | orchestrator | Sunday 29 March 2026 04:28:15 +0000 (0:00:02.123) 0:04:31.004 **********
2026-03-29 04:28:25.890415 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:28:25.890420 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:28:25.890424 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:28:25.890428 | orchestrator |
2026-03-29 04:28:25.890434 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-03-29 04:28:25.890438 | orchestrator | Sunday 29 March 2026 04:28:17 +0000 (0:00:02.278) 0:04:33.283 **********
2026-03-29 04:28:25.890442 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:28:25.890446 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:28:25.890449 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:28:25.890453 | orchestrator |
2026-03-29 04:28:25.890457 | orchestrator | TASK [include_role : manila] ***************************************************
2026-03-29 04:28:25.890461 | orchestrator | Sunday 29 March 2026 04:28:20 +0000 (0:00:02.761) 0:04:36.045 **********
2026-03-29 04:28:25.890465 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 04:28:25.890468 | orchestrator |
2026-03-29 04:28:25.890472 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-03-29 04:28:25.890476 | orchestrator | Sunday 29 March 2026 04:28:22 +0000 (0:00:02.000) 0:04:38.046 **********
2026-03-29 04:28:25.890480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:25.890486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 04:28:25.890498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 04:28:27.610677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 04:28:27.610800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:27.610817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 04:28:27.610831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:27.610843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 04:28:27.610897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 04:28:27.610910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 04:28:27.610928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 04:28:27.610939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 04:28:27.610951 | orchestrator |
2026-03-29 04:28:27.610964 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-03-29 04:28:27.610976 | orchestrator | Sunday 29 March 2026 04:28:26 +0000 (0:00:04.548) 0:04:42.594 **********
2026-03-29 04:28:27.610988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:27.611014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797208 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:28:30.797224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:30.797236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797318 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:28:30.797335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-29 04:28:30.797347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 04:28:30.797388 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:28:30.797400 | orchestrator |
2026-03-29 04:28:30.797412 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-29 04:28:30.797424 | orchestrator | Sunday 29 March 2026 04:28:28 +0000 (0:00:01.718) 0:04:44.312 **********
2026-03-29 04:28:30.797437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:30.797451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:30.797464 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:28:30.797475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:30.797494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:45.777403 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:28:45.777548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:45.777582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-29 04:28:45.777605 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:28:45.777658 | orchestrator |
2026-03-29 04:28:45.777678 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-29 04:28:45.777699 | orchestrator | Sunday 29 March 2026 04:28:30 +0000 (0:00:02.066) 0:04:46.378 **********
2026-03-29 04:28:45.777717 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:28:45.777737 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:28:45.777755 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:28:45.777773 | orchestrator |
2026-03-29 04:28:45.777791 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-29 04:28:45.777829 | orchestrator | Sunday 29 March 2026 04:28:33 +0000 (0:00:02.305) 0:04:48.684 **********
2026-03-29 04:28:45.777849 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:28:45.777867 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:28:45.777884 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:28:45.777904 | orchestrator |
2026-03-29 04:28:45.777923 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-29 04:28:45.777943 | orchestrator | Sunday 29 March 2026 04:28:35 +0000 (0:00:02.844) 0:04:51.529 **********
2026-03-29 04:28:45.777965 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 04:28:45.777988 | orchestrator |
2026-03-29 04:28:45.778008 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-03-29 04:28:45.778119 | orchestrator | Sunday 29 March 2026 04:28:38 +0000 (0:00:02.409) 0:04:53.938 **********
2026-03-29 04:28:45.778140 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 04:28:45.778159 | orchestrator |
2026-03-29 04:28:45.778178 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-03-29 04:28:45.778233 | orchestrator | Sunday 29 March 2026 04:28:42 +0000 (0:00:03.962) 0:04:57.901 **********
2026-03-29 04:28:45.778262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:28:45.778317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 04:28:45.778342 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:28:45.778412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:28:45.778454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 04:28:45.778474 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:28:45.778512 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:28:49.223717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 04:28:49.223832 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:28:49.223850 | orchestrator | 2026-03-29 04:28:49.223862 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-29 04:28:49.223874 | orchestrator | Sunday 29 March 2026 04:28:45 +0000 (0:00:03.454) 0:05:01.355 ********** 2026-03-29 04:28:49.223913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:28:49.223928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 04:28:49.223940 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:28:49.223979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:28:49.224002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 04:28:49.224014 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:28:49.224027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  
2026-03-29 04:28:49.224048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 04:29:04.503400 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:04.503488 | orchestrator | 2026-03-29 04:29:04.503498 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-29 04:29:04.503506 | orchestrator | Sunday 29 March 2026 04:28:49 +0000 (0:00:03.450) 0:05:04.806 ********** 2026-03-29 04:29:04.503529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 04:29:04.503558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 04:29:04.503565 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:04.503571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 04:29:04.503578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 04:29:04.503584 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:04.503591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 04:29:04.503597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 04:29:04.503604 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:04.503610 | orchestrator | 2026-03-29 04:29:04.503617 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-29 04:29:04.503658 | orchestrator | Sunday 29 March 2026 04:28:52 +0000 (0:00:03.637) 0:05:08.443 ********** 2026-03-29 04:29:04.503671 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:29:04.503689 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:29:04.503696 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:29:04.503702 | orchestrator | 2026-03-29 04:29:04.503708 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-29 04:29:04.503715 | orchestrator | Sunday 29 March 2026 04:28:55 +0000 (0:00:02.974) 0:05:11.418 ********** 2026-03-29 04:29:04.503721 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:04.503727 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:04.503733 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
04:29:04.503740 | orchestrator | 2026-03-29 04:29:04.503749 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-29 04:29:04.503756 | orchestrator | Sunday 29 March 2026 04:28:58 +0000 (0:00:02.535) 0:05:13.954 ********** 2026-03-29 04:29:04.503762 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:04.503769 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:04.503775 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:04.503781 | orchestrator | 2026-03-29 04:29:04.503788 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-29 04:29:04.503794 | orchestrator | Sunday 29 March 2026 04:28:59 +0000 (0:00:01.380) 0:05:15.334 ********** 2026-03-29 04:29:04.503800 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:29:04.503806 | orchestrator | 2026-03-29 04:29:04.503812 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-29 04:29:04.503818 | orchestrator | Sunday 29 March 2026 04:29:01 +0000 (0:00:02.189) 0:05:17.524 ********** 2026-03-29 04:29:04.503825 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 
04:29:04.503833 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 04:29:04.503840 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 04:29:04.503847 | orchestrator | 2026-03-29 04:29:04.503858 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-29 04:29:04.503866 | orchestrator | Sunday 29 March 2026 04:29:04 +0000 (0:00:02.453) 0:05:19.978 ********** 2026-03-29 04:29:04.503877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 04:29:19.181208 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:19.181349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 04:29:19.181372 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:19.181384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 04:29:19.181394 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:19.181404 | orchestrator | 2026-03-29 04:29:19.181415 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-29 04:29:19.181426 | orchestrator | Sunday 29 March 2026 04:29:06 +0000 (0:00:01.684) 0:05:21.663 ********** 2026-03-29 04:29:19.181437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 04:29:19.181448 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:19.181458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 04:29:19.181468 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:19.181478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 04:29:19.181509 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 04:29:19.181519 | orchestrator | 2026-03-29 04:29:19.181529 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-29 04:29:19.181538 | orchestrator | Sunday 29 March 2026 04:29:07 +0000 (0:00:01.402) 0:05:23.066 ********** 2026-03-29 04:29:19.181547 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:19.181557 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:19.181567 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:19.181576 | orchestrator | 2026-03-29 04:29:19.181586 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-29 04:29:19.181595 | orchestrator | Sunday 29 March 2026 04:29:08 +0000 (0:00:01.478) 0:05:24.544 ********** 2026-03-29 04:29:19.181605 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:19.181614 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:19.181624 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:19.181633 | orchestrator | 2026-03-29 04:29:19.181694 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-29 04:29:19.181704 | orchestrator | Sunday 29 March 2026 04:29:11 +0000 (0:00:02.465) 0:05:27.010 ********** 2026-03-29 04:29:19.181716 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:19.181727 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:19.181738 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:19.181749 | orchestrator | 2026-03-29 04:29:19.181760 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-29 04:29:19.181772 | orchestrator | Sunday 29 March 2026 04:29:12 +0000 (0:00:01.346) 0:05:28.356 ********** 2026-03-29 04:29:19.181783 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:29:19.181795 | orchestrator | 2026-03-29 04:29:19.181806 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-29 04:29:19.181817 | orchestrator | Sunday 29 March 2026 04:29:14 +0000 (0:00:01.945) 0:05:30.302 ********** 2026-03-29 04:29:19.181857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:29:19.181874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.181888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-29 04:29:19.181933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-29 04:29:19.181961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.318796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:19.318899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-03-29 04:29:19.318916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 04:29:19.318953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:19.318966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.318979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-29 04:29:19.319023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:19.319037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.319050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 04:29:19.319071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 
04:29:19.319084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:29:19.319108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.394710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-29 04:29:19.394839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-03-29 04:29:19.394854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-29 04:29:19.394877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.394907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.394922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:19.394963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-29 04:29:19.394978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-29 04:29:19.395005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:19.395035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.479547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 04:29:19.479809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:19.479848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:19.479870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:19.479892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 04:29:19.479942 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.479992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:19.480019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-29 04:29:19.480032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:19.480046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:19.480060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-29 04:29:19.480073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:19.480100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.506762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.506884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 04:29:21.506901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 04:29:21.506914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:21.506941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:21.507034 | orchestrator | 2026-03-29 04:29:21.507066 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-29 04:29:21.507078 | orchestrator | Sunday 29 March 2026 04:29:20 +0000 (0:00:05.864) 0:05:36.167 ********** 2026-03-29 04:29:21.507090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:29:21.507102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.507114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-29 04:29:21.507130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-29 04:29:21.507158 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.568789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:21.568896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:21.568913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 04:29:21.568928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:29:21.568959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:21.569012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.569025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.569037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-29 04:29:21.569049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-29 04:29:21.569066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:21.569085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-29 04:29:21.569104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-03-29 04:29:21.635207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.635304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 04:29:21.635320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:21.635393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:29:21.635406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:21.635433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:21.635446 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:21.635459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.635470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 04:29:21.635492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-29 04:29:21.635503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:21.635521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-29 04:29:21.830441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.830549 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.830607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-29 04:29:21.830623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:21.830674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:21.830699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:21.830741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.830761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-29 04:29:21.830781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:21.830839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 04:29:21.830854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:21.830866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:21.830887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-29 04:29:37.090779 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:37.090872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-29 04:29:37.090914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 04:29:37.090924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 04:29:37.090932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 04:29:37.090939 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:37.090946 | orchestrator | 2026-03-29 04:29:37.090953 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-29 04:29:37.090961 | orchestrator | Sunday 29 March 2026 04:29:22 +0000 (0:00:02.230) 0:05:38.397 ********** 2026-03-29 04:29:37.090968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:29:37.090978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:29:37.090986 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:37.090992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:29:37.091010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:29:37.091026 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:37.091032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:29:37.091038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:29:37.091045 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:37.091051 | orchestrator | 2026-03-29 04:29:37.091057 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-29 04:29:37.091064 | orchestrator | Sunday 29 March 2026 04:29:25 +0000 
(0:00:02.391) 0:05:40.789 ********** 2026-03-29 04:29:37.091070 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:29:37.091077 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:29:37.091083 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:29:37.091090 | orchestrator | 2026-03-29 04:29:37.091096 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-29 04:29:37.091102 | orchestrator | Sunday 29 March 2026 04:29:27 +0000 (0:00:02.223) 0:05:43.012 ********** 2026-03-29 04:29:37.091108 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:29:37.091114 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:29:37.091120 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:29:37.091126 | orchestrator | 2026-03-29 04:29:37.091132 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-29 04:29:37.091142 | orchestrator | Sunday 29 March 2026 04:29:30 +0000 (0:00:02.927) 0:05:45.940 ********** 2026-03-29 04:29:37.091148 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:29:37.091154 | orchestrator | 2026-03-29 04:29:37.091160 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-29 04:29:37.091166 | orchestrator | Sunday 29 March 2026 04:29:32 +0000 (0:00:02.232) 0:05:48.172 ********** 2026-03-29 04:29:37.091173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-29 04:29:37.091181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-29 04:29:37.091198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-29 04:29:53.357608 | orchestrator | 2026-03-29 04:29:53.357730 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-29 04:29:53.357739 | orchestrator | Sunday 29 March 2026 04:29:37 +0000 (0:00:04.503) 0:05:52.675 ********** 2026-03-29 04:29:53.357757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-29 04:29:53.357764 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:53.357769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-29 04:29:53.357774 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:53.357791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-29 04:29:53.357795 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:53.357799 | orchestrator | 2026-03-29 04:29:53.357803 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-29 04:29:53.357808 | orchestrator | Sunday 29 March 2026 04:29:38 +0000 (0:00:01.506) 0:05:54.182 ********** 2026-03-29 04:29:53.357813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:29:53.357830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:29:53.357835 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:53.357839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:29:53.357846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:29:53.357850 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:53.357854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:29:53.357858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:29:53.357862 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:29:53.357865 | orchestrator | 2026-03-29 04:29:53.357869 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-29 04:29:53.357873 | orchestrator | Sunday 29 March 2026 04:29:40 +0000 (0:00:01.770) 0:05:55.952 ********** 2026-03-29 04:29:53.357877 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:29:53.357881 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:29:53.357885 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:29:53.357889 | orchestrator | 2026-03-29 04:29:53.357893 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-29 04:29:53.357896 | orchestrator | Sunday 29 March 2026 04:29:42 +0000 (0:00:02.289) 0:05:58.242 ********** 2026-03-29 04:29:53.357904 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:29:53.357908 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:29:53.357912 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:29:53.357916 | orchestrator | 2026-03-29 04:29:53.357919 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-29 
04:29:53.357924 | orchestrator | Sunday 29 March 2026 04:29:45 +0000 (0:00:02.901) 0:06:01.143 ********** 2026-03-29 04:29:53.357928 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:29:53.357932 | orchestrator | 2026-03-29 04:29:53.357935 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-29 04:29:53.357939 | orchestrator | Sunday 29 March 2026 04:29:47 +0000 (0:00:02.243) 0:06:03.387 ********** 2026-03-29 04:29:53.357943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:29:53.357951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:29:54.510096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:29:54.510219 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:29:54.510236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:29:54.510249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:29:54.510285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 04:29:54.510298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:29:54.510316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 04:29:54.510327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:29:54.510338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:29:54.510348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 04:29:54.510359 | orchestrator | 2026-03-29 04:29:54.510370 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-29 04:29:54.510389 | orchestrator | Sunday 29 March 2026 04:29:54 +0000 (0:00:06.710) 0:06:10.097 ********** 2026-03-29 04:29:55.272472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:29:55.272582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:29:55.272593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:29:55.272600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 04:29:55.272607 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:29:55.272634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:29:55.272641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:29:55.272654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-03-29 04:29:55.272661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 04:29:55.272708 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:29:55.272716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:29:55.272735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:30:15.083132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 04:30:15.083222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 04:30:15.083234 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:30:15.083243 | orchestrator | 2026-03-29 04:30:15.083250 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-29 04:30:15.083261 | orchestrator | Sunday 29 March 2026 04:29:56 +0000 (0:00:01.938) 0:06:12.036 ********** 2026-03-29 04:30:15.083272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083325 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
04:30:15.083335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083460 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:30:15.083468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:30:15.083481 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:30:15.083487 | orchestrator | 2026-03-29 04:30:15.083494 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-29 04:30:15.083500 | orchestrator | Sunday 29 March 2026 04:29:58 +0000 (0:00:02.519) 0:06:14.556 ********** 2026-03-29 04:30:15.083506 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:30:15.083514 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:30:15.083520 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:30:15.083527 | orchestrator | 2026-03-29 04:30:15.083533 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-29 04:30:15.083539 | orchestrator | Sunday 29 March 2026 04:30:01 +0000 (0:00:02.299) 0:06:16.855 ********** 2026-03-29 04:30:15.083545 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:30:15.083551 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:30:15.083557 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:30:15.083563 | orchestrator | 2026-03-29 04:30:15.083570 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-29 04:30:15.083576 | orchestrator | Sunday 29 March 2026 04:30:03 +0000 (0:00:02.515) 0:06:19.371 ********** 2026-03-29 04:30:15.083582 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:30:15.083588 | orchestrator | 2026-03-29 04:30:15.083594 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-03-29 04:30:15.083600 | orchestrator | Sunday 29 March 2026 04:30:06 +0000 (0:00:02.540) 0:06:21.911 ********** 2026-03-29 04:30:15.083607 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-29 04:30:15.083614 | orchestrator | 2026-03-29 04:30:15.083620 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-29 04:30:15.083627 | orchestrator | Sunday 29 March 2026 04:30:07 +0000 (0:00:01.561) 0:06:23.473 ********** 2026-03-29 04:30:15.083635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 04:30:15.083652 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 04:30:15.083660 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 04:30:15.083668 | orchestrator | 2026-03-29 04:30:15.083676 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-29 04:30:15.083688 | orchestrator | Sunday 29 March 2026 04:30:13 +0000 (0:00:05.139) 0:06:28.612 ********** 2026-03-29 04:30:15.083697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:15.083729 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:30:36.568826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:36.568953 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:30:36.568973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:36.568986 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:30:36.568998 | orchestrator | 2026-03-29 04:30:36.569010 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-29 04:30:36.569022 | orchestrator | Sunday 29 March 2026 04:30:15 +0000 (0:00:02.051) 0:06:30.664 ********** 2026-03-29 04:30:36.569034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 04:30:36.569049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 04:30:36.569062 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:30:36.569102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 04:30:36.569123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 04:30:36.569142 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:30:36.569160 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 04:30:36.569179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 04:30:36.569196 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:30:36.569214 | orchestrator | 2026-03-29 04:30:36.569232 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 04:30:36.569251 | orchestrator | Sunday 29 March 2026 04:30:17 +0000 (0:00:02.118) 0:06:32.783 ********** 2026-03-29 04:30:36.569269 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:30:36.569288 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:30:36.569307 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:30:36.569327 | orchestrator | 2026-03-29 04:30:36.569346 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 04:30:36.569367 | orchestrator | Sunday 29 March 2026 04:30:20 +0000 (0:00:03.455) 0:06:36.239 ********** 2026-03-29 04:30:36.569388 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:30:36.569407 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:30:36.569426 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:30:36.569439 | orchestrator | 2026-03-29 04:30:36.569451 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-29 04:30:36.569481 | orchestrator | Sunday 29 March 2026 04:30:24 +0000 (0:00:03.516) 0:06:39.755 ********** 2026-03-29 04:30:36.569496 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-29 04:30:36.569511 | orchestrator | 2026-03-29 04:30:36.569523 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-29 04:30:36.569536 | orchestrator | Sunday 29 March 2026 04:30:25 +0000 (0:00:01.592) 0:06:41.348 ********** 2026-03-29 04:30:36.569570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:36.569585 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:30:36.569598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:36.569610 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:30:36.569635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:36.569648 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:30:36.569660 | orchestrator | 2026-03-29 04:30:36.569673 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-29 04:30:36.569684 | orchestrator | Sunday 29 March 2026 04:30:28 +0000 (0:00:02.429) 0:06:43.777 ********** 2026-03-29 04:30:36.569695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:36.569707 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:30:36.569718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:36.569728 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:30:36.569765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 04:30:36.569779 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:30:36.569790 | orchestrator | 2026-03-29 04:30:36.569801 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-29 04:30:36.569818 | orchestrator | Sunday 29 March 2026 04:30:30 +0000 (0:00:02.598) 0:06:46.376 ********** 2026-03-29 04:30:36.569829 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:30:36.569840 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:30:36.569850 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:30:36.569861 | orchestrator | 2026-03-29 04:30:36.569872 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 04:30:36.569882 | orchestrator | Sunday 29 March 2026 04:30:33 +0000 (0:00:02.276) 0:06:48.652 ********** 2026-03-29 04:30:36.569893 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:30:36.569903 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:30:36.569914 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:30:36.569924 | orchestrator | 2026-03-29 04:30:36.569935 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 04:30:36.569946 | orchestrator | Sunday 29 March 2026 04:30:36 +0000 (0:00:03.494) 0:06:52.147 ********** 2026-03-29 04:31:03.133731 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:31:03.133892 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:31:03.133910 | orchestrator | ok: [testbed-node-2] 2026-03-29 
04:31:03.133946 | orchestrator | 2026-03-29 04:31:03.133974 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-29 04:31:03.133997 | orchestrator | Sunday 29 March 2026 04:30:40 +0000 (0:00:03.923) 0:06:56.071 ********** 2026-03-29 04:31:03.134008 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-29 04:31:03.134083 | orchestrator | 2026-03-29 04:31:03.134096 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-29 04:31:03.134107 | orchestrator | Sunday 29 March 2026 04:30:42 +0000 (0:00:02.211) 0:06:58.282 ********** 2026-03-29 04:31:03.134122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 04:31:03.134136 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:03.134149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 04:31:03.134161 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:03.134172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 04:31:03.134183 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:03.134194 | orchestrator | 2026-03-29 04:31:03.134205 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-29 04:31:03.134221 | orchestrator | Sunday 29 March 2026 04:30:45 +0000 (0:00:02.391) 0:07:00.673 ********** 2026-03-29 04:31:03.134234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 04:31:03.134245 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:03.134271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 04:31:03.134286 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:03.134330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 04:31:03.134344 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:03.134356 | orchestrator | 2026-03-29 04:31:03.134369 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-29 04:31:03.134381 | orchestrator | Sunday 29 March 2026 04:30:47 +0000 (0:00:02.322) 0:07:02.996 ********** 2026-03-29 04:31:03.134394 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:03.134407 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:03.134420 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:03.134432 | orchestrator | 2026-03-29 04:31:03.134445 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 04:31:03.134458 | orchestrator | Sunday 29 March 2026 04:30:49 +0000 (0:00:02.222) 0:07:05.219 ********** 2026-03-29 04:31:03.134470 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:31:03.134483 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:31:03.134495 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:31:03.134508 | orchestrator | 2026-03-29 04:31:03.134521 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 04:31:03.134533 | orchestrator | Sunday 29 March 2026 04:30:52 +0000 (0:00:03.157) 0:07:08.377 ********** 2026-03-29 04:31:03.134546 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:31:03.134559 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:31:03.134571 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:31:03.134584 | orchestrator | 2026-03-29 04:31:03.134597 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-29 04:31:03.134610 | orchestrator | Sunday 29 March 2026 04:30:56 +0000 (0:00:04.212) 0:07:12.590 ********** 2026-03-29 04:31:03.134624 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:31:03.134636 | orchestrator | 2026-03-29 04:31:03.134647 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-29 04:31:03.134658 | orchestrator | Sunday 29 March 2026 04:30:59 +0000 (0:00:02.405) 0:07:14.995 ********** 2026-03-29 04:31:03.134671 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 04:31:03.134685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 04:31:03.134704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 04:31:03.134731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 04:31:05.151312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:31:05.151411 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 04:31:05.151428 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 04:31:05.151467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 04:31:05.151495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 04:31:05.151524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 04:31:05.151539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 04:31:05.151551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 04:31:05.151562 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 04:31:05.151574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:31:05.151593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:31:05.151605 | 
orchestrator | 2026-03-29 04:31:05.151618 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-29 04:31:05.151630 | orchestrator | Sunday 29 March 2026 04:31:04 +0000 (0:00:04.813) 0:07:19.808 ********** 2026-03-29 04:31:05.151681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 04:31:06.271925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 04:31:06.272005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 04:31:06.272015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 04:31:06.272041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:31:06.272048 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:06.272067 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 04:31:06.272076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 04:31:06.272095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 04:31:06.272102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 04:31:06.272108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:31:06.272119 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:06.272126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 04:31:06.272135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 04:31:06.272145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-03-29 04:31:22.740313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 04:31:22.740461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 04:31:22.740519 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:22.740544 | orchestrator | 2026-03-29 04:31:22.740565 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-29 04:31:22.740583 | orchestrator | Sunday 29 March 2026 04:31:06 +0000 (0:00:02.051) 0:07:21.860 ********** 2026-03-29 04:31:22.740602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 04:31:22.740624 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 04:31:22.740645 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:22.740664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 04:31:22.740684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 04:31:22.740702 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:22.740718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 04:31:22.740729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 04:31:22.740740 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:22.740751 | orchestrator | 2026-03-29 04:31:22.740778 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-29 04:31:22.740820 | orchestrator | Sunday 29 March 2026 04:31:08 +0000 (0:00:02.068) 0:07:23.929 ********** 2026-03-29 04:31:22.740834 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:31:22.740848 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:31:22.740860 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:31:22.740873 | orchestrator | 2026-03-29 
04:31:22.740886 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-29 04:31:22.740898 | orchestrator | Sunday 29 March 2026 04:31:10 +0000 (0:00:02.276) 0:07:26.206 ********** 2026-03-29 04:31:22.740910 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:31:22.740923 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:31:22.740935 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:31:22.740947 | orchestrator | 2026-03-29 04:31:22.740959 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-29 04:31:22.740972 | orchestrator | Sunday 29 March 2026 04:31:13 +0000 (0:00:02.871) 0:07:29.078 ********** 2026-03-29 04:31:22.740985 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:31:22.740998 | orchestrator | 2026-03-29 04:31:22.741011 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-29 04:31:22.741024 | orchestrator | Sunday 29 March 2026 04:31:15 +0000 (0:00:02.464) 0:07:31.542 ********** 2026-03-29 04:31:22.741062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:31:22.741089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:31:22.741103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:31:22.741124 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:31:22.741150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:31:26.650528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:31:26.650617 | orchestrator | 2026-03-29 04:31:26.650631 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-29 04:31:26.650641 | orchestrator | Sunday 29 March 2026 04:31:22 +0000 (0:00:06.778) 0:07:38.321 ********** 2026-03-29 04:31:26.650663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:31:26.650674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:31:26.650712 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:26.650739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:31:26.650750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:31:26.650760 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:26.650773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:31:26.650783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:31:26.650843 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:26.650855 | orchestrator | 2026-03-29 04:31:26.650864 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-29 04:31:26.650873 | orchestrator | Sunday 29 March 2026 04:31:24 +0000 (0:00:02.123) 0:07:40.444 ********** 2026-03-29 04:31:26.650884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:26.650901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-29 04:31:35.508207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-29 04:31:35.508292 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:35.508303 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:35.508310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-29 04:31:35.508317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-29 04:31:35.508323 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:35.508329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:35.508334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-29 04:31:35.508352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-29 04:31:35.508358 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 04:31:35.508363 | orchestrator | 2026-03-29 04:31:35.508369 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-29 04:31:35.508376 | orchestrator | Sunday 29 March 2026 04:31:26 +0000 (0:00:01.794) 0:07:42.239 ********** 2026-03-29 04:31:35.508382 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:35.508405 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:35.508411 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:35.508417 | orchestrator | 2026-03-29 04:31:35.508423 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-29 04:31:35.508428 | orchestrator | Sunday 29 March 2026 04:31:28 +0000 (0:00:01.533) 0:07:43.772 ********** 2026-03-29 04:31:35.508433 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:35.508439 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:35.508444 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:35.508450 | orchestrator | 2026-03-29 04:31:35.508455 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-29 04:31:35.508461 | orchestrator | Sunday 29 March 2026 04:31:30 +0000 (0:00:02.287) 0:07:46.060 ********** 2026-03-29 04:31:35.508466 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:31:35.508472 | orchestrator | 2026-03-29 04:31:35.508478 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-29 04:31:35.508483 | orchestrator | Sunday 29 March 2026 04:31:32 +0000 (0:00:02.514) 0:07:48.574 ********** 2026-03-29 04:31:35.508503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-29 04:31:35.508511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 04:31:35.508519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 
04:31:35.508526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:35.508536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 04:31:35.508548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-29 04:31:35.508554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 04:31:35.508565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check 
send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-29 04:31:37.454771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:37.454974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 04:31:37.455038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:37.455060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:37.455083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 04:31:37.455104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:37.455123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 04:31:37.455174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:31:37.455223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-29 04:31:37.455247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:37.455269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:37.455283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 04:31:37.455305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:31:39.504123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-29 04:31:39.504282 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:31:39.504303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.504317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-29 04:31:39.504329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.504367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.504386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 04:31:39.504398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.504410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 04:31:39.504422 | orchestrator | 2026-03-29 04:31:39.504435 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-29 04:31:39.504447 | orchestrator | Sunday 29 March 2026 04:31:38 +0000 (0:00:05.658) 0:07:54.233 ********** 2026-03-29 04:31:39.504459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-29 04:31:39.504472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 04:31:39.504498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-29 04:31:39.669286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.669388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 04:31:39.669408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:31:39.669423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-29 04:31:39.669436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.669490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.669510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 04:31:39.669523 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:39.669537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-29 04:31:39.669549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 04:31:39.669561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.669573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:39.669592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 04:31:39.669619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:31:40.858256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-29 04:31:40.858383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:40.859214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:40.859264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-29 04:31:40.859333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 04:31:40.859375 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:40.859430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 04:31:40.859455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:40.859475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:40.859495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 04:31:40.859518 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:31:40.859561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}}}})  2026-03-29 04:31:40.859597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:52.814139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:31:52.814253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 04:31:52.814271 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:52.814285 | orchestrator | 2026-03-29 04:31:52.814297 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-29 04:31:52.814309 | orchestrator | Sunday 29 March 2026 04:31:40 +0000 
(0:00:02.215) 0:07:56.448 ********** 2026-03-29 04:31:52.814321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-29 04:31:52.814361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-29 04:31:52.814377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:52.814389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:52.814402 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:52.814413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-29 04:31:52.814438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-29 04:31:52.814450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:52.814481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:52.814493 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:52.814504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-29 04:31:52.814516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-29 04:31:52.814527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:52.814546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-29 04:31:52.814557 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:52.814568 | orchestrator | 2026-03-29 04:31:52.814580 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-29 04:31:52.814593 | orchestrator | Sunday 29 March 2026 04:31:42 +0000 (0:00:01.861) 0:07:58.310 ********** 2026-03-29 04:31:52.814606 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:52.814618 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:52.814631 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:31:52.814644 | orchestrator | 2026-03-29 04:31:52.814656 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-29 04:31:52.814668 | orchestrator | Sunday 29 March 2026 04:31:44 +0000 (0:00:01.915) 0:08:00.226 ********** 2026-03-29 04:31:52.814681 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:31:52.814693 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:31:52.814706 | orchestrator | 
skipping: [testbed-node-2] 2026-03-29 04:31:52.814718 | orchestrator | 2026-03-29 04:31:52.814730 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-29 04:31:52.814742 | orchestrator | Sunday 29 March 2026 04:31:46 +0000 (0:00:02.231) 0:08:02.457 ********** 2026-03-29 04:31:52.814755 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:31:52.814767 | orchestrator | 2026-03-29 04:31:52.814779 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-29 04:31:52.814792 | orchestrator | Sunday 29 March 2026 04:31:49 +0000 (0:00:02.202) 0:08:04.660 ********** 2026-03-29 04:31:52.814852 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:31:52.814893 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:32:09.840546 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:32:09.840671 | orchestrator | 2026-03-29 04:32:09.840689 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-29 
04:32:09.840701 | orchestrator | Sunday 29 March 2026 04:31:52 +0000 (0:00:03.730) 0:08:08.390 ********** 2026-03-29 04:32:09.840714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:32:09.840727 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:32:09.840757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:32:09.840770 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:32:09.840798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:32:09.840888 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:32:09.840902 | orchestrator | 2026-03-29 04:32:09.840913 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-29 04:32:09.840924 | orchestrator | Sunday 29 March 2026 04:31:54 +0000 (0:00:01.442) 0:08:09.832 ********** 2026-03-29 04:32:09.840936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 04:32:09.840948 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:32:09.840959 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-29 04:32:09.840970 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:32:09.840981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-29 04:32:09.840992 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:32:09.841003 | orchestrator |
2026-03-29 04:32:09.841014 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-03-29 04:32:09.841025 | orchestrator | Sunday 29 March 2026 04:31:55 +0000 (0:00:01.474) 0:08:11.307 **********
2026-03-29 04:32:09.841035 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:32:09.841046 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:32:09.841057 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:32:09.841070 | orchestrator |
2026-03-29 04:32:09.841083 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-03-29 04:32:09.841096 | orchestrator | Sunday 29 March 2026 04:31:57 +0000 (0:00:01.925) 0:08:13.233 **********
2026-03-29 04:32:09.841108 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:32:09.841121 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:32:09.841134 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:32:09.841147 | orchestrator |
2026-03-29 04:32:09.841160 | orchestrator | TASK [include_role : skyline] **************************************************
2026-03-29 04:32:09.841172 | orchestrator | Sunday 29 March 2026 04:31:59 +0000 (0:00:02.260) 0:08:15.493 **********
2026-03-29 04:32:09.841185 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 04:32:09.841197 | orchestrator |
2026-03-29 04:32:09.841210 | orchestrator |
TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-29 04:32:09.841223 | orchestrator | Sunday 29 March 2026 04:32:02 +0000 (0:00:02.255) 0:08:17.748 ********** 2026-03-29 04:32:09.841237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-29 04:32:09.841267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-29 04:32:09.841292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-29 04:32:11.546546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-29 04:32:11.546646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-29 04:32:11.546699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-29 04:32:11.546720 | orchestrator | 2026-03-29 04:32:11.546738 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-29 04:32:11.546758 | orchestrator | Sunday 29 March 2026 04:32:09 +0000 (0:00:07.677) 0:08:25.426 ********** 2026-03-29 04:32:11.546801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-29 04:32:11.546911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-29 04:32:11.546928 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:32:11.546944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-29 04:32:11.546970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-29 04:32:11.546988 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:32:11.547020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-29 04:32:32.797765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-29 04:32:32.797983 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:32:32.798005 | orchestrator | 2026-03-29 04:32:32.798066 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-29 04:32:32.798079 | orchestrator | Sunday 29 March 2026 04:32:11 +0000 (0:00:01.705) 
0:08:27.131 ********** 2026-03-29 04:32:32.798091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-29 04:32:32.798118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-29 04:32:32.798130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:32:32.798141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:32:32.798151 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:32:32.798161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-29 04:32:32.798171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-29 04:32:32.798181 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:32:32.798192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:32:32.798201 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:32:32.798211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-29 04:32:32.798222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-29 04:32:32.798250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:32:32.798261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-29 04:32:32.798279 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:32:32.798289 | orchestrator | 
2026-03-29 04:32:32.798299 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-29 04:32:32.798311 | orchestrator | Sunday 29 March 2026 04:32:13 +0000 (0:00:02.078) 0:08:29.210 **********
2026-03-29 04:32:32.798322 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:32:32.798334 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:32:32.798345 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:32:32.798355 | orchestrator |
2026-03-29 04:32:32.798367 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-29 04:32:32.798378 | orchestrator | Sunday 29 March 2026 04:32:15 +0000 (0:00:02.311) 0:08:31.521 **********
2026-03-29 04:32:32.798390 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:32:32.798401 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:32:32.798412 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:32:32.798423 | orchestrator |
2026-03-29 04:32:32.798434 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-29 04:32:32.798445 | orchestrator | Sunday 29 March 2026 04:32:18 +0000 (0:00:02.958) 0:08:34.479 **********
2026-03-29 04:32:32.798456 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:32:32.798467 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:32:32.798478 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:32:32.798489 | orchestrator |
2026-03-29 04:32:32.798499 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-29 04:32:32.798510 | orchestrator | Sunday 29 March 2026 04:32:20 +0000 (0:00:01.354) 0:08:35.834 **********
2026-03-29 04:32:32.798522 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:32:32.798533 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:32:32.798549 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:32:32.798560 | orchestrator |
2026-03-29 04:32:32.798571 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-29 04:32:32.798582 | orchestrator | Sunday 29 March 2026 04:32:21 +0000 (0:00:01.342) 0:08:37.177 **********
2026-03-29 04:32:32.798593 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:32:32.798605 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:32:32.798615 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:32:32.798626 | orchestrator |
2026-03-29 04:32:32.798637 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-29 04:32:32.798648 | orchestrator | Sunday 29 March 2026 04:32:23 +0000 (0:00:01.614) 0:08:38.791 **********
2026-03-29 04:32:32.798659 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:32:32.798669 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:32:32.798679 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:32:32.798689 | orchestrator |
2026-03-29 04:32:32.798698 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-29 04:32:32.798708 | orchestrator | Sunday 29 March 2026 04:32:24 +0000 (0:00:01.329) 0:08:40.121 **********
2026-03-29 04:32:32.798718 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:32:32.798728 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:32:32.798737 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:32:32.798747 | orchestrator |
2026-03-29 04:32:32.798757 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-03-29 04:32:32.798766 | orchestrator | Sunday 29 March 2026 04:32:25 +0000 (0:00:01.409) 0:08:41.530 **********
2026-03-29 04:32:32.798776 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 04:32:32.798786 | orchestrator |
2026-03-29 04:32:32.798796 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-03-29 04:32:32.798806 | orchestrator | Sunday 29 March 2026 04:32:28 +0000 (0:00:02.734) 0:08:44.265 ********** 2026-03-29 04:32:32.798817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 04:32:32.798843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 04:32:36.598341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 04:32:36.598450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:32:36.598485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:32:36.598498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 04:32:36.598511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:32:36.598546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 04:32:36.598582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-03-29 04:32:36.598605 | orchestrator |
2026-03-29 04:32:36.598626 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-03-29 04:32:36.598644 | orchestrator | Sunday 29 March 2026 04:32:32 +0000 (0:00:04.116) 0:08:48.381 **********
2026-03-29 04:32:36.598664 | orchestrator | changed: [testbed-node-0] => {
2026-03-29 04:32:36.598682 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:32:36.598702 | orchestrator | }
2026-03-29 04:32:36.598721 | orchestrator | changed: [testbed-node-1] => {
2026-03-29 04:32:36.598741 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:32:36.598760 | orchestrator | }
2026-03-29 04:32:36.598779 | orchestrator | changed: [testbed-node-2] => {
2026-03-29 04:32:36.598793 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:32:36.598804 | orchestrator | }
2026-03-29 04:32:36.598815 | orchestrator |
2026-03-29 04:32:36.598826 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-29 04:32:36.598837 | orchestrator | Sunday 29 March 2026 04:32:34 +0000 (0:00:01.425) 0:08:49.807 **********
2026-03-29 04:32:36.598849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 04:32:36.598900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:32:36.598931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:32:36.598965 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:32:36.598980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 04:32:36.598993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:32:36.599019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:34:37.546313 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:37.546468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 04:34:37.546524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 04:34:37.546540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 04:34:37.546578 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:37.546591 | orchestrator | 2026-03-29 04:34:37.546603 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-29 04:34:37.546615 | orchestrator | Sunday 29 March 2026 04:32:36 +0000 (0:00:02.369) 0:08:52.177 ********** 2026-03-29 04:34:37.546626 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:34:37.546637 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:34:37.546648 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:34:37.546659 | orchestrator | 2026-03-29 04:34:37.546670 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-29 04:34:37.546681 | orchestrator | Sunday 29 March 2026 04:32:38 +0000 (0:00:01.773) 0:08:53.951 
********** 2026-03-29 04:34:37.546691 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:34:37.546702 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:34:37.546712 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:34:37.546723 | orchestrator | 2026-03-29 04:34:37.546734 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-29 04:34:37.546744 | orchestrator | Sunday 29 March 2026 04:32:39 +0000 (0:00:01.387) 0:08:55.338 ********** 2026-03-29 04:34:37.546755 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:34:37.546766 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:34:37.546776 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:34:37.546787 | orchestrator | 2026-03-29 04:34:37.546798 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-29 04:34:37.546808 | orchestrator | Sunday 29 March 2026 04:32:46 +0000 (0:00:07.052) 0:09:02.391 ********** 2026-03-29 04:34:37.546821 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:34:37.546839 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:34:37.546858 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:34:37.546879 | orchestrator | 2026-03-29 04:34:37.546898 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-29 04:34:37.546917 | orchestrator | Sunday 29 March 2026 04:32:54 +0000 (0:00:07.307) 0:09:09.699 ********** 2026-03-29 04:34:37.546930 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:34:37.546943 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:34:37.546955 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:34:37.546968 | orchestrator | 2026-03-29 04:34:37.547042 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-29 04:34:37.547055 | orchestrator | Sunday 29 March 2026 04:33:01 +0000 (0:00:07.057) 0:09:16.756 ********** 2026-03-29 
04:34:37.547067 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:34:37.547079 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:34:37.547092 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:34:37.547104 | orchestrator | 2026-03-29 04:34:37.547132 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-29 04:34:37.547145 | orchestrator | Sunday 29 March 2026 04:33:08 +0000 (0:00:07.549) 0:09:24.306 ********** 2026-03-29 04:34:37.547169 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:34:37.547183 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:34:37.547195 | orchestrator | 2026-03-29 04:34:37.547208 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-29 04:34:37.547219 | orchestrator | Sunday 29 March 2026 04:33:12 +0000 (0:00:03.743) 0:09:28.050 ********** 2026-03-29 04:34:37.547229 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:34:37.547240 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:34:37.547251 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:34:37.547261 | orchestrator | 2026-03-29 04:34:37.547291 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-29 04:34:37.547302 | orchestrator | Sunday 29 March 2026 04:33:25 +0000 (0:00:13.471) 0:09:41.521 ********** 2026-03-29 04:34:37.547313 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:34:37.547335 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:34:37.547346 | orchestrator | 2026-03-29 04:34:37.547356 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-29 04:34:37.547367 | orchestrator | Sunday 29 March 2026 04:33:30 +0000 (0:00:04.632) 0:09:46.154 ********** 2026-03-29 04:34:37.547378 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:34:37.547389 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:34:37.547400 | orchestrator | 
changed: [testbed-node-2] 2026-03-29 04:34:37.547411 | orchestrator | 2026-03-29 04:34:37.547421 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-29 04:34:37.547432 | orchestrator | Sunday 29 March 2026 04:33:37 +0000 (0:00:07.242) 0:09:53.396 ********** 2026-03-29 04:34:37.547443 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:37.547455 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:37.547475 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:34:37.547493 | orchestrator | 2026-03-29 04:34:37.547511 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-29 04:34:37.547529 | orchestrator | Sunday 29 March 2026 04:33:44 +0000 (0:00:06.813) 0:10:00.210 ********** 2026-03-29 04:34:37.547547 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:37.547565 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:37.547582 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:34:37.547599 | orchestrator | 2026-03-29 04:34:37.547617 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-29 04:34:37.547643 | orchestrator | Sunday 29 March 2026 04:33:51 +0000 (0:00:06.821) 0:10:07.032 ********** 2026-03-29 04:34:37.547664 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:37.547683 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:37.547703 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:34:37.547723 | orchestrator | 2026-03-29 04:34:37.547740 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-29 04:34:37.547756 | orchestrator | Sunday 29 March 2026 04:33:58 +0000 (0:00:06.854) 0:10:13.886 ********** 2026-03-29 04:34:37.547768 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:37.547778 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:37.547789 | orchestrator | 
changed: [testbed-node-0] 2026-03-29 04:34:37.547799 | orchestrator | 2026-03-29 04:34:37.547810 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-03-29 04:34:37.547820 | orchestrator | Sunday 29 March 2026 04:34:05 +0000 (0:00:07.203) 0:10:21.090 ********** 2026-03-29 04:34:37.547831 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:34:37.547841 | orchestrator | 2026-03-29 04:34:37.547852 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-29 04:34:37.547862 | orchestrator | Sunday 29 March 2026 04:34:09 +0000 (0:00:03.579) 0:10:24.669 ********** 2026-03-29 04:34:37.547873 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:37.547883 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:37.547894 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:34:37.547904 | orchestrator | 2026-03-29 04:34:37.547915 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-03-29 04:34:37.547926 | orchestrator | Sunday 29 March 2026 04:34:22 +0000 (0:00:12.984) 0:10:37.654 ********** 2026-03-29 04:34:37.547936 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:34:37.547946 | orchestrator | 2026-03-29 04:34:37.547957 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-29 04:34:37.547968 | orchestrator | Sunday 29 March 2026 04:34:25 +0000 (0:00:03.612) 0:10:41.267 ********** 2026-03-29 04:34:37.548006 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:37.548017 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:37.548028 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:34:37.548039 | orchestrator | 2026-03-29 04:34:37.548049 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-29 04:34:37.548060 | orchestrator | Sunday 29 March 2026 04:34:32 +0000 (0:00:06.787) 0:10:48.054 
********** 2026-03-29 04:34:37.548080 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:34:37.548093 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:34:37.548110 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:34:37.548139 | orchestrator | 2026-03-29 04:34:37.548157 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-29 04:34:37.548174 | orchestrator | Sunday 29 March 2026 04:34:34 +0000 (0:00:02.061) 0:10:50.115 ********** 2026-03-29 04:34:37.548192 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:34:37.548207 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:34:37.548223 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:34:37.548240 | orchestrator | 2026-03-29 04:34:37.548260 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:34:37.548281 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-29 04:34:37.548300 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-29 04:34:37.548317 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-29 04:34:37.548329 | orchestrator | 2026-03-29 04:34:37.548339 | orchestrator | 2026-03-29 04:34:37.548350 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:34:37.548361 | orchestrator | Sunday 29 March 2026 04:34:37 +0000 (0:00:02.997) 0:10:53.113 ********** 2026-03-29 04:34:37.548371 | orchestrator | =============================================================================== 2026-03-29 04:34:37.548382 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.47s 2026-03-29 04:34:37.548393 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.98s 2026-03-29 04:34:37.548403 | 
orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.68s 2026-03-29 04:34:37.548434 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.55s 2026-03-29 04:34:38.472885 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.31s 2026-03-29 04:34:38.473108 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.24s 2026-03-29 04:34:38.473138 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.20s 2026-03-29 04:34:38.473157 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.06s 2026-03-29 04:34:38.473174 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.05s 2026-03-29 04:34:38.473186 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.85s 2026-03-29 04:34:38.473197 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.82s 2026-03-29 04:34:38.473208 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.81s 2026-03-29 04:34:38.473219 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.79s 2026-03-29 04:34:38.473229 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.78s 2026-03-29 04:34:38.473240 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.71s 2026-03-29 04:34:38.473250 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.86s 2026-03-29 04:34:38.473288 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.76s 2026-03-29 04:34:38.473319 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.66s 2026-03-29 04:34:38.473330 | orchestrator | 
haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.14s 2026-03-29 04:34:38.473341 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.02s 2026-03-29 04:34:38.757367 | orchestrator | + osism apply -a upgrade opensearch 2026-03-29 04:34:40.778139 | orchestrator | 2026-03-29 04:34:40 | INFO  | Task caa82570-f9c1-40ed-9c86-68c63ede7e90 (opensearch) was prepared for execution. 2026-03-29 04:34:40.778254 | orchestrator | 2026-03-29 04:34:40 | INFO  | It takes a moment until task caa82570-f9c1-40ed-9c86-68c63ede7e90 (opensearch) has been started and output is visible here. 2026-03-29 04:34:50.866791 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-29 04:34:50.866884 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-29 04:34:50.866904 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-29 04:34:50.866912 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-29 04:34:50.866927 | orchestrator | 2026-03-29 04:34:50.866935 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 04:34:50.866942 | orchestrator | 2026-03-29 04:34:50.866950 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 04:34:50.866957 | orchestrator | Sunday 29 March 2026 04:34:45 +0000 (0:00:00.956) 0:00:00.956 ********** 2026-03-29 04:34:50.866965 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:34:50.866973 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:34:50.867044 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:34:50.867054 | orchestrator | 2026-03-29 04:34:50.867062 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 04:34:50.867069 | orchestrator | Sunday 29 March 2026 04:34:46 +0000 (0:00:00.799) 0:00:01.755 ********** 
2026-03-29 04:34:50.867077 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-29 04:34:50.867084 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-29 04:34:50.867091 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-29 04:34:50.867099 | orchestrator | 2026-03-29 04:34:50.867106 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-29 04:34:50.867113 | orchestrator | 2026-03-29 04:34:50.867120 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 04:34:50.867127 | orchestrator | Sunday 29 March 2026 04:34:46 +0000 (0:00:00.760) 0:00:02.516 ********** 2026-03-29 04:34:50.867135 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:34:50.867143 | orchestrator | 2026-03-29 04:34:50.867150 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-29 04:34:50.867158 | orchestrator | Sunday 29 March 2026 04:34:47 +0000 (0:00:01.019) 0:00:03.535 ********** 2026-03-29 04:34:50.867165 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 04:34:50.867173 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 04:34:50.867180 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 04:34:50.867187 | orchestrator | 2026-03-29 04:34:50.867194 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-29 04:34:50.867201 | orchestrator | Sunday 29 March 2026 04:34:49 +0000 (0:00:01.779) 0:00:05.314 ********** 2026-03-29 04:34:50.867212 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:50.867256 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:50.867280 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:50.867291 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:34:50.867300 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:34:50.867328 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:34:54.862966 | orchestrator | 2026-03-29 04:34:54.863132 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 04:34:54.863151 | orchestrator | Sunday 29 March 2026 04:34:50 +0000 (0:00:01.266) 0:00:06.580 ********** 2026-03-29 04:34:54.863164 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:34:54.863176 | orchestrator | 2026-03-29 04:34:54.863187 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-29 04:34:54.863198 | orchestrator | Sunday 29 March 2026 04:34:51 +0000 (0:00:00.838) 0:00:07.419 ********** 2026-03-29 04:34:54.863211 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:54.863226 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:54.863263 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:54.863310 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:34:54.863327 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:34:54.863341 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:34:54.863361 | orchestrator | 2026-03-29 04:34:54.863372 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-29 04:34:54.863383 | orchestrator | Sunday 29 March 2026 04:34:54 +0000 (0:00:02.447) 0:00:09.867 ********** 2026-03-29 04:34:54.863400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:34:54.863422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:34:55.879667 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 04:34:55.879780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:34:55.879803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:34:55.879847 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:55.879879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:34:55.879913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:34:55.879930 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:55.879944 | orchestrator | 2026-03-29 04:34:55.879959 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-29 04:34:55.879975 | orchestrator | Sunday 29 March 2026 04:34:54 +0000 (0:00:00.715) 0:00:10.583 ********** 2026-03-29 04:34:55.880066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:34:55.880093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:34:55.880107 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:34:55.880127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-03-29 04:34:55.880152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:34:58.611344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:34:58.611531 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:34:58.611578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:34:58.611596 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:34:58.611606 | orchestrator | 2026-03-29 04:34:58.611616 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-29 04:34:58.611626 | orchestrator | Sunday 29 March 2026 04:34:55 +0000 (0:00:01.010) 0:00:11.594 ********** 2026-03-29 04:34:58.611635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:58.611663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:58.611681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:34:58.611695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:34:58.611706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:34:58.611724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:35:07.239026 | orchestrator | 2026-03-29 04:35:07.239129 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-29 04:35:07.239146 | orchestrator | Sunday 29 March 2026 04:34:58 +0000 (0:00:02.731) 0:00:14.325 ********** 2026-03-29 04:35:07.239158 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:35:07.239169 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:35:07.239180 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:35:07.239191 | orchestrator | 2026-03-29 04:35:07.239202 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-29 04:35:07.239213 | orchestrator | Sunday 29 March 2026 04:35:00 +0000 (0:00:02.355) 0:00:16.680 ********** 2026-03-29 04:35:07.239223 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:35:07.239234 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:35:07.239245 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:35:07.239255 | orchestrator | 2026-03-29 04:35:07.239266 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-03-29 04:35:07.239277 | orchestrator | Sunday 29 March 2026 04:35:02 +0000 (0:00:01.997) 0:00:18.678 ********** 2026-03-29 04:35:07.239291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:35:07.239319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:35:07.239331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-29 04:35:07.239381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:35:07.239401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:35:07.239415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-29 04:35:07.239427 | orchestrator | 2026-03-29 04:35:07.239439 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-03-29 04:35:07.239457 | orchestrator | Sunday 29 March 2026 04:35:05 +0000 (0:00:02.689) 0:00:21.367 ********** 2026-03-29 04:35:07.239469 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:35:07.239481 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:35:07.239492 | orchestrator | } 2026-03-29 04:35:07.239503 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:35:07.239514 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:35:07.239525 | orchestrator | } 2026-03-29 04:35:07.239536 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:35:07.239547 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:35:07.239560 | orchestrator | } 2026-03-29 04:35:07.239573 | orchestrator | 2026-03-29 04:35:07.239586 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:35:07.239599 | orchestrator | Sunday 29 March 2026 04:35:06 +0000 (0:00:00.371) 0:00:21.739 ********** 2026-03-29 04:35:07.239620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:38:13.355615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:38:13.355763 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:38:13.355800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:38:13.355849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 04:38:13.355862 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:38:13.355894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-29 04:38:13.355913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-29 
04:38:13.355926 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:38:13.355938 | orchestrator | 2026-03-29 04:38:13.355950 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 04:38:13.355962 | orchestrator | Sunday 29 March 2026 04:35:07 +0000 (0:00:01.217) 0:00:22.957 ********** 2026-03-29 04:38:13.355973 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:38:13.355985 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-29 04:38:13.355997 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-29 04:38:13.356033 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:38:13.356044 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:38:13.356054 | orchestrator | 2026-03-29 04:38:13.356065 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 04:38:13.356077 | orchestrator | Sunday 29 March 2026 04:35:07 +0000 (0:00:00.466) 0:00:23.424 ********** 2026-03-29 04:38:13.356174 | orchestrator | 2026-03-29 04:38:13.356188 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 04:38:13.356201 | orchestrator | Sunday 29 March 2026 04:35:07 +0000 (0:00:00.072) 0:00:23.496 ********** 2026-03-29 04:38:13.356213 | orchestrator | 2026-03-29 04:38:13.356226 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 04:38:13.356238 | orchestrator | Sunday 29 March 2026 04:35:07 +0000 (0:00:00.071) 0:00:23.567 ********** 2026-03-29 04:38:13.356251 | orchestrator | 2026-03-29 04:38:13.356263 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-29 04:38:13.356276 | orchestrator | Sunday 29 March 2026 04:35:07 +0000 (0:00:00.071) 0:00:23.639 ********** 2026-03-29 04:38:13.356289 | orchestrator | ok: [testbed-node-0] 2026-03-29 
04:38:13.356302 | orchestrator | 2026-03-29 04:38:13.356314 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-29 04:38:13.356327 | orchestrator | Sunday 29 March 2026 04:35:10 +0000 (0:00:02.298) 0:00:25.937 ********** 2026-03-29 04:38:13.356339 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:38:13.356351 | orchestrator | 2026-03-29 04:38:13.356364 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-29 04:38:13.356376 | orchestrator | Sunday 29 March 2026 04:35:18 +0000 (0:00:08.724) 0:00:34.662 ********** 2026-03-29 04:38:13.356388 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:38:13.356400 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:38:13.356413 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:38:13.356426 | orchestrator | 2026-03-29 04:38:13.356438 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-29 04:38:13.356451 | orchestrator | Sunday 29 March 2026 04:36:29 +0000 (0:01:10.529) 0:01:45.192 ********** 2026-03-29 04:38:13.356463 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:38:13.356476 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:38:13.356488 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:38:13.356500 | orchestrator | 2026-03-29 04:38:13.356510 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 04:38:13.356523 | orchestrator | Sunday 29 March 2026 04:38:07 +0000 (0:01:38.115) 0:03:23.307 ********** 2026-03-29 04:38:13.356543 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:38:13.356560 | orchestrator | 2026-03-29 04:38:13.356579 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-29 04:38:13.356599 | orchestrator | Sunday 29 March 
2026 04:38:08 +0000 (0:00:00.934) 0:03:24.242 ********** 2026-03-29 04:38:13.356618 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:38:13.356636 | orchestrator | 2026-03-29 04:38:13.356656 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-29 04:38:13.356668 | orchestrator | Sunday 29 March 2026 04:38:10 +0000 (0:00:02.396) 0:03:26.638 ********** 2026-03-29 04:38:13.356679 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:38:13.356689 | orchestrator | 2026-03-29 04:38:13.356709 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-29 04:38:15.438283 | orchestrator | Sunday 29 March 2026 04:38:13 +0000 (0:00:02.429) 0:03:29.068 ********** 2026-03-29 04:38:15.438384 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:38:15.438401 | orchestrator | 2026-03-29 04:38:15.438414 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-29 04:38:15.438426 | orchestrator | Sunday 29 March 2026 04:38:13 +0000 (0:00:00.219) 0:03:29.288 ********** 2026-03-29 04:38:15.438463 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:38:15.438474 | orchestrator | 2026-03-29 04:38:15.438485 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:38:15.438498 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 04:38:15.438510 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 04:38:15.438521 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 04:38:15.438533 | orchestrator | 2026-03-29 04:38:15.438544 | orchestrator | 2026-03-29 04:38:15.438554 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:38:15.438565 | 
orchestrator | Sunday 29 March 2026 04:38:15 +0000 (0:00:01.549) 0:03:30.837 ********** 2026-03-29 04:38:15.438576 | orchestrator | =============================================================================== 2026-03-29 04:38:15.438587 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 98.11s 2026-03-29 04:38:15.438597 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.53s 2026-03-29 04:38:15.438622 | orchestrator | opensearch : Perform a flush -------------------------------------------- 8.72s 2026-03-29 04:38:15.438633 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.73s 2026-03-29 04:38:15.438644 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.69s 2026-03-29 04:38:15.438654 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.45s 2026-03-29 04:38:15.438665 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.43s 2026-03-29 04:38:15.438675 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.40s 2026-03-29 04:38:15.438686 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.36s 2026-03-29 04:38:15.438697 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 2.30s 2026-03-29 04:38:15.438745 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.00s 2026-03-29 04:38:15.438756 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.78s 2026-03-29 04:38:15.438767 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.55s 2026-03-29 04:38:15.438777 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.27s 2026-03-29 04:38:15.438789 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 1.22s 2026-03-29 04:38:15.438802 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.02s 2026-03-29 04:38:15.438814 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.01s 2026-03-29 04:38:15.438826 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.94s 2026-03-29 04:38:15.438839 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.84s 2026-03-29 04:38:15.438851 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s 2026-03-29 04:38:15.701480 | orchestrator | + osism apply -a upgrade memcached 2026-03-29 04:38:17.639577 | orchestrator | 2026-03-29 04:38:17 | INFO  | Task eb8de5f7-e711-4a9b-be4b-28bfa74dcc14 (memcached) was prepared for execution. 2026-03-29 04:38:17.639688 | orchestrator | 2026-03-29 04:38:17 | INFO  | It takes a moment until task eb8de5f7-e711-4a9b-be4b-28bfa74dcc14 (memcached) has been started and output is visible here. 
2026-03-29 04:38:50.348574 | orchestrator | 2026-03-29 04:38:50.348678 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 04:38:50.348690 | orchestrator | 2026-03-29 04:38:50.348698 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 04:38:50.348727 | orchestrator | Sunday 29 March 2026 04:38:23 +0000 (0:00:01.611) 0:00:01.611 ********** 2026-03-29 04:38:50.348735 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:38:50.348744 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:38:50.348751 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:38:50.348758 | orchestrator | 2026-03-29 04:38:50.348765 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 04:38:50.348773 | orchestrator | Sunday 29 March 2026 04:38:25 +0000 (0:00:01.665) 0:00:03.277 ********** 2026-03-29 04:38:50.348781 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-29 04:38:50.348789 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-29 04:38:50.348796 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-29 04:38:50.348803 | orchestrator | 2026-03-29 04:38:50.348811 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-29 04:38:50.348818 | orchestrator | 2026-03-29 04:38:50.348825 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-29 04:38:50.348832 | orchestrator | Sunday 29 March 2026 04:38:26 +0000 (0:00:01.910) 0:00:05.188 ********** 2026-03-29 04:38:50.348840 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:38:50.348847 | orchestrator | 2026-03-29 04:38:50.348854 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-03-29 04:38:50.348861 | orchestrator | Sunday 29 March 2026 04:38:29 +0000 (0:00:02.377) 0:00:07.565 ********** 2026-03-29 04:38:50.348868 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-29 04:38:50.348876 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-29 04:38:50.348883 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-29 04:38:50.348890 | orchestrator | 2026-03-29 04:38:50.348897 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-29 04:38:50.348904 | orchestrator | Sunday 29 March 2026 04:38:31 +0000 (0:00:01.933) 0:00:09.498 ********** 2026-03-29 04:38:50.348911 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-29 04:38:50.348919 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-29 04:38:50.348926 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-29 04:38:50.348933 | orchestrator | 2026-03-29 04:38:50.348940 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-03-29 04:38:50.348947 | orchestrator | Sunday 29 March 2026 04:38:33 +0000 (0:00:02.671) 0:00:12.170 ********** 2026-03-29 04:38:50.348969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}}}}) 2026-03-29 04:38:50.348980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 04:38:50.349057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 04:38:50.349070 | orchestrator | 2026-03-29 04:38:50.349078 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-03-29 04:38:50.349085 | orchestrator | Sunday 29 March 2026 04:38:36 +0000 (0:00:02.186) 0:00:14.356 ********** 2026-03-29 04:38:50.349093 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 
04:38:50.349100 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:38:50.349108 | orchestrator | } 2026-03-29 04:38:50.349115 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:38:50.349122 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:38:50.349130 | orchestrator | } 2026-03-29 04:38:50.349137 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:38:50.349144 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:38:50.349151 | orchestrator | } 2026-03-29 04:38:50.349158 | orchestrator | 2026-03-29 04:38:50.349166 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:38:50.349173 | orchestrator | Sunday 29 March 2026 04:38:37 +0000 (0:00:01.346) 0:00:15.703 ********** 2026-03-29 04:38:50.349180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 04:38:50.349188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 04:38:50.349196 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:38:50.349203 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:38:50.349215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 04:38:50.349229 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:38:50.349236 | orchestrator | 2026-03-29 04:38:50.349243 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-29 04:38:50.349251 | orchestrator | Sunday 29 March 2026 04:38:39 +0000 (0:00:01.960) 0:00:17.664 ********** 2026-03-29 04:38:50.349258 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:38:50.349265 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:38:50.349272 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:38:50.349279 | orchestrator | 2026-03-29 04:38:50.349287 | 
orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:38:50.349295 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 04:38:50.349303 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 04:38:50.349311 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 04:38:50.349318 | orchestrator | 2026-03-29 04:38:50.349325 | orchestrator | 2026-03-29 04:38:50.349332 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:38:50.349345 | orchestrator | Sunday 29 March 2026 04:38:50 +0000 (0:00:10.876) 0:00:28.540 ********** 2026-03-29 04:38:50.666705 | orchestrator | =============================================================================== 2026-03-29 04:38:50.666809 | orchestrator | memcached : Restart memcached container -------------------------------- 10.88s 2026-03-29 04:38:50.666824 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.67s 2026-03-29 04:38:50.666836 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.38s 2026-03-29 04:38:50.666846 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.19s 2026-03-29 04:38:50.666857 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.96s 2026-03-29 04:38:50.666868 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.93s 2026-03-29 04:38:50.666879 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.91s 2026-03-29 04:38:50.666890 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.67s 2026-03-29 04:38:50.666900 | orchestrator | 
service-check-containers : memcached | Notify handlers to restart containers --- 1.35s 2026-03-29 04:38:50.956448 | orchestrator | + osism apply -a upgrade redis 2026-03-29 04:38:53.078823 | orchestrator | 2026-03-29 04:38:53 | INFO  | Task 256539ce-a532-4a55-bbd2-f22da0da9a36 (redis) was prepared for execution. 2026-03-29 04:38:53.078919 | orchestrator | 2026-03-29 04:38:53 | INFO  | It takes a moment until task 256539ce-a532-4a55-bbd2-f22da0da9a36 (redis) has been started and output is visible here. 2026-03-29 04:39:04.260657 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-29 04:39:04.260798 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-29 04:39:04.260846 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-29 04:39:04.260864 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-29 04:39:04.260931 | orchestrator | 2026-03-29 04:39:04.260952 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 04:39:04.260971 | orchestrator | 2026-03-29 04:39:04.261051 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 04:39:04.261071 | orchestrator | Sunday 29 March 2026 04:38:58 +0000 (0:00:00.964) 0:00:00.964 ********** 2026-03-29 04:39:04.261092 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:39:04.261113 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:39:04.261131 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:39:04.261145 | orchestrator | 2026-03-29 04:39:04.261156 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 04:39:04.261167 | orchestrator | Sunday 29 March 2026 04:38:59 +0000 (0:00:00.876) 0:00:01.841 ********** 2026-03-29 04:39:04.261179 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-29 04:39:04.261190 | 
orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-29 04:39:04.261219 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-29 04:39:04.261231 | orchestrator | 2026-03-29 04:39:04.261244 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-29 04:39:04.261256 | orchestrator | 2026-03-29 04:39:04.261268 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-29 04:39:04.261280 | orchestrator | Sunday 29 March 2026 04:38:59 +0000 (0:00:00.747) 0:00:02.588 ********** 2026-03-29 04:39:04.261293 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:39:04.261307 | orchestrator | 2026-03-29 04:39:04.261320 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-29 04:39:04.261332 | orchestrator | Sunday 29 March 2026 04:39:00 +0000 (0:00:01.000) 0:00:03.589 ********** 2026-03-29 04:39:04.261348 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261368 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261381 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261396 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261444 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261465 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261485 | orchestrator | 2026-03-29 04:39:04.261505 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-29 04:39:04.261523 | orchestrator | Sunday 29 March 2026 04:39:02 +0000 (0:00:01.340) 0:00:04.929 ********** 2026-03-29 04:39:04.261542 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261611 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:04.261632 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-03-29 04:39:04.261669 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906425 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906557 | orchestrator | 2026-03-29 04:39:09.906575 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-29 04:39:09.906588 | orchestrator | Sunday 29 March 2026 04:39:04 +0000 (0:00:02.090) 0:00:07.020 ********** 2026-03-29 04:39:09.906643 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906671 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906683 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906718 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906769 | orchestrator | 2026-03-29 04:39:09.906781 | orchestrator 
| TASK [service-check-containers : redis | Check containers] ********************* 2026-03-29 04:39:09.906792 | orchestrator | Sunday 29 March 2026 04:39:07 +0000 (0:00:03.542) 0:00:10.563 ********** 2026-03-29 04:39:09.906805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-03-29 04:39:09.906892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:09.906955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 04:39:32.608912 | orchestrator | 2026-03-29 04:39:32.609107 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-29 04:39:32.609126 | orchestrator | Sunday 29 March 2026 04:39:09 +0000 (0:00:02.107) 0:00:12.671 ********** 2026-03-29 04:39:32.609140 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:39:32.609152 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:39:32.609163 | orchestrator | } 2026-03-29 04:39:32.609175 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:39:32.609186 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:39:32.609198 | orchestrator | } 2026-03-29 04:39:32.609208 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:39:32.609220 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:39:32.609231 | orchestrator | } 2026-03-29 04:39:32.609242 | orchestrator | 2026-03-29 04:39:32.609253 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:39:32.609264 | orchestrator | Sunday 29 March 2026 04:39:10 +0000 (0:00:00.558) 0:00:13.229 ********** 2026-03-29 04:39:32.609277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}})  2026-03-29 04:39:32.609292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-29 04:39:32.609333 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-29 04:39:32.609345 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-29 04:39:32.609367 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:39:32.609379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-29 04:39:32.609391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-29 04:39:32.609403 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:39:32.609450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-29 04:39:32.609466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-29 04:39:32.609479 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:39:32.609492 | orchestrator | 2026-03-29 
04:39:32.609505 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-29 04:39:32.609518 | orchestrator | Sunday 29 March 2026 04:39:11 +0000 (0:00:01.006) 0:00:14.236 ********** 2026-03-29 04:39:32.609542 | orchestrator | 2026-03-29 04:39:32.609555 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-29 04:39:32.609568 | orchestrator | Sunday 29 March 2026 04:39:11 +0000 (0:00:00.078) 0:00:14.314 ********** 2026-03-29 04:39:32.609580 | orchestrator | 2026-03-29 04:39:32.609593 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-29 04:39:32.609605 | orchestrator | Sunday 29 March 2026 04:39:11 +0000 (0:00:00.069) 0:00:14.384 ********** 2026-03-29 04:39:32.609618 | orchestrator | 2026-03-29 04:39:32.609630 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-29 04:39:32.609643 | orchestrator | Sunday 29 March 2026 04:39:11 +0000 (0:00:00.069) 0:00:14.454 ********** 2026-03-29 04:39:32.609655 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:39:32.609668 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:39:32.609680 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:39:32.609693 | orchestrator | 2026-03-29 04:39:32.609705 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-29 04:39:32.609719 | orchestrator | Sunday 29 March 2026 04:39:21 +0000 (0:00:09.733) 0:00:24.188 ********** 2026-03-29 04:39:32.609738 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:39:32.609757 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:39:32.609776 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:39:32.609795 | orchestrator | 2026-03-29 04:39:32.609813 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 04:39:32.609832 | 
orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 04:39:32.609852 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 04:39:32.609871 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 04:39:32.609890 | orchestrator | 2026-03-29 04:39:32.609909 | orchestrator | 2026-03-29 04:39:32.609966 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:39:32.609987 | orchestrator | Sunday 29 March 2026 04:39:32 +0000 (0:00:10.843) 0:00:35.031 ********** 2026-03-29 04:39:32.610006 | orchestrator | =============================================================================== 2026-03-29 04:39:32.610096 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.84s 2026-03-29 04:39:32.610110 | orchestrator | redis : Restart redis container ----------------------------------------- 9.73s 2026-03-29 04:39:32.610121 | orchestrator | redis : Copying over redis config files --------------------------------- 3.54s 2026-03-29 04:39:32.610134 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.11s 2026-03-29 04:39:32.610164 | orchestrator | redis : Copying over default config.json files -------------------------- 2.09s 2026-03-29 04:39:32.610183 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.34s 2026-03-29 04:39:32.610201 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.01s 2026-03-29 04:39:32.610218 | orchestrator | redis : include_tasks --------------------------------------------------- 1.00s 2026-03-29 04:39:32.610233 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2026-03-29 04:39:32.610244 | orchestrator | Group hosts 
based on enabled services ----------------------------------- 0.75s 2026-03-29 04:39:32.610255 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.56s 2026-03-29 04:39:32.610266 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2026-03-29 04:39:32.880148 | orchestrator | + osism apply -a upgrade mariadb 2026-03-29 04:39:34.923258 | orchestrator | 2026-03-29 04:39:34 | INFO  | Task 14504d8a-6ef6-4eb4-be03-c53454541bd2 (mariadb) was prepared for execution. 2026-03-29 04:39:34.923364 | orchestrator | 2026-03-29 04:39:34 | INFO  | It takes a moment until task 14504d8a-6ef6-4eb4-be03-c53454541bd2 (mariadb) has been started and output is visible here. 2026-03-29 04:40:00.134949 | orchestrator | 2026-03-29 04:40:00.135078 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 04:40:00.135097 | orchestrator | 2026-03-29 04:40:00.135126 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 04:40:00.135137 | orchestrator | Sunday 29 March 2026 04:39:40 +0000 (0:00:01.547) 0:00:01.547 ********** 2026-03-29 04:40:00.135149 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:40:00.135161 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:40:00.135172 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:40:00.135182 | orchestrator | 2026-03-29 04:40:00.135193 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 04:40:00.135205 | orchestrator | Sunday 29 March 2026 04:39:42 +0000 (0:00:01.725) 0:00:03.273 ********** 2026-03-29 04:40:00.135216 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-29 04:40:00.135227 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-29 04:40:00.135238 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-29 04:40:00.135248 | 
orchestrator | 2026-03-29 04:40:00.135259 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-29 04:40:00.135270 | orchestrator | 2026-03-29 04:40:00.135281 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-29 04:40:00.135292 | orchestrator | Sunday 29 March 2026 04:39:44 +0000 (0:00:02.334) 0:00:05.607 ********** 2026-03-29 04:40:00.135303 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 04:40:00.135314 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 04:40:00.135325 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 04:40:00.135335 | orchestrator | 2026-03-29 04:40:00.135346 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 04:40:00.135357 | orchestrator | Sunday 29 March 2026 04:39:46 +0000 (0:00:01.520) 0:00:07.128 ********** 2026-03-29 04:40:00.135368 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:40:00.135380 | orchestrator | 2026-03-29 04:40:00.135394 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-29 04:40:00.135413 | orchestrator | Sunday 29 March 2026 04:39:47 +0000 (0:00:01.672) 0:00:08.800 ********** 2026-03-29 04:40:00.135453 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:00.135547 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:00.135577 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:00.135599 | orchestrator | 2026-03-29 04:40:00.135617 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-29 04:40:00.135646 | orchestrator | Sunday 29 March 2026 04:39:51 +0000 (0:00:03.783) 0:00:12.584 ********** 2026-03-29 04:40:00.135666 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:00.135687 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:00.135706 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:40:00.135725 | orchestrator | 2026-03-29 04:40:00.135744 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-29 04:40:00.135762 | orchestrator | Sunday 29 March 2026 04:39:53 +0000 (0:00:01.619) 0:00:14.203 ********** 2026-03-29 04:40:00.135780 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:00.135800 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:00.135819 | 
orchestrator | ok: [testbed-node-0] 2026-03-29 04:40:00.135837 | orchestrator | 2026-03-29 04:40:00.135857 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-29 04:40:00.135868 | orchestrator | Sunday 29 March 2026 04:39:55 +0000 (0:00:02.227) 0:00:16.430 ********** 2026-03-29 04:40:00.135948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:11.250722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:11.250912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:11.250934 | orchestrator | 
2026-03-29 04:40:11.250949 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-29 04:40:11.250961 | orchestrator | Sunday 29 March 2026 04:40:00 +0000 (0:00:04.623) 0:00:21.054 ********** 2026-03-29 04:40:11.250972 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:11.250983 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:11.250994 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:40:11.251005 | orchestrator | 2026-03-29 04:40:11.251017 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-29 04:40:11.251043 | orchestrator | Sunday 29 March 2026 04:40:01 +0000 (0:00:01.838) 0:00:22.892 ********** 2026-03-29 04:40:11.251055 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:40:11.251065 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:40:11.251076 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:40:11.251087 | orchestrator | 2026-03-29 04:40:11.251097 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 04:40:11.251108 | orchestrator | Sunday 29 March 2026 04:40:06 +0000 (0:00:04.422) 0:00:27.315 ********** 2026-03-29 04:40:11.251119 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:40:11.251130 | orchestrator | 2026-03-29 04:40:11.251141 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-29 04:40:11.251152 | orchestrator | Sunday 29 March 2026 04:40:08 +0000 (0:00:01.751) 0:00:29.066 ********** 2026-03-29 04:40:11.251173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:11.251185 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:11.251210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:17.877064 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:17.877170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:17.877203 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:17.877213 | orchestrator | 2026-03-29 04:40:17.877222 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-29 04:40:17.877232 | orchestrator | Sunday 29 March 2026 04:40:11 +0000 (0:00:03.099) 0:00:32.166 ********** 2026-03-29 04:40:17.877249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:17.877259 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:17.877284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:17.877300 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:17.877313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:17.877322 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:17.877330 | orchestrator | 2026-03-29 04:40:17.877338 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-29 04:40:17.877347 | orchestrator | Sunday 29 March 2026 04:40:14 
+0000 (0:00:03.165) 0:00:35.331 ********** 2026-03-29 04:40:17.877368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:21.548904 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 04:40:21.549022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:21.549042 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:21.549055 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:21.549087 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:21.549098 | orchestrator | 2026-03-29 04:40:21.549108 | orchestrator | TASK 
[service-check-containers : mariadb | Check containers] ******************* 2026-03-29 04:40:21.549118 | orchestrator | Sunday 29 March 2026 04:40:17 +0000 (0:00:03.464) 0:00:38.796 ********** 2026-03-29 04:40:21.549151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:21.549166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:21.549198 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 04:40:35.808618 | orchestrator | 2026-03-29 04:40:35.808715 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] 
*** 2026-03-29 04:40:35.808725 | orchestrator | Sunday 29 March 2026 04:40:21 +0000 (0:00:03.669) 0:00:42.465 ********** 2026-03-29 04:40:35.808733 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:40:35.808740 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:40:35.808747 | orchestrator | } 2026-03-29 04:40:35.808754 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:40:35.808760 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:40:35.808766 | orchestrator | } 2026-03-29 04:40:35.808772 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:40:35.808779 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:40:35.808802 | orchestrator | } 2026-03-29 04:40:35.808865 | orchestrator | 2026-03-29 04:40:35.808873 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:40:35.808879 | orchestrator | Sunday 29 March 2026 04:40:22 +0000 (0:00:01.292) 0:00:43.758 ********** 2026-03-29 04:40:35.808888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:35.808896 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:35.808929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': 
[' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:35.808945 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:35.808953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:35.808960 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:35.808966 | orchestrator | 2026-03-29 04:40:35.808973 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-03-29 04:40:35.808979 | orchestrator | Sunday 29 March 2026 04:40:26 +0000 (0:00:03.514) 0:00:47.272 ********** 2026-03-29 04:40:35.808985 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:35.808991 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:35.808997 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:35.809003 | orchestrator | 2026-03-29 04:40:35.809009 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-03-29 04:40:35.809015 | orchestrator | Sunday 29 March 2026 04:40:27 +0000 (0:00:01.373) 0:00:48.646 ********** 2026-03-29 04:40:35.809021 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:35.809028 | orchestrator | 2026-03-29 04:40:35.809034 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-03-29 04:40:35.809040 | orchestrator | Sunday 29 March 2026 04:40:28 +0000 (0:00:01.118) 0:00:49.765 ********** 2026-03-29 
04:40:35.809046 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:35.809052 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:35.809058 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:35.809064 | orchestrator | 2026-03-29 04:40:35.809070 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-03-29 04:40:35.809076 | orchestrator | Sunday 29 March 2026 04:40:30 +0000 (0:00:01.390) 0:00:51.155 ********** 2026-03-29 04:40:35.809082 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:35.809088 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:35.809094 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:35.809100 | orchestrator | 2026-03-29 04:40:35.809107 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-03-29 04:40:35.809113 | orchestrator | Sunday 29 March 2026 04:40:31 +0000 (0:00:01.547) 0:00:52.702 ********** 2026-03-29 04:40:35.809119 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:35.809125 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:35.809137 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:35.809143 | orchestrator | 2026-03-29 04:40:35.809149 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-03-29 04:40:35.809155 | orchestrator | Sunday 29 March 2026 04:40:33 +0000 (0:00:01.396) 0:00:54.099 ********** 2026-03-29 04:40:35.809161 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:35.809167 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:35.809173 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:35.809179 | orchestrator | 2026-03-29 04:40:35.809185 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-03-29 04:40:35.809194 | orchestrator | Sunday 29 March 2026 04:40:34 +0000 (0:00:01.314) 0:00:55.413 ********** 2026-03-29 
04:40:35.809201 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:35.809207 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:35.809213 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:35.809219 | orchestrator | 2026-03-29 04:40:35.809229 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-03-29 04:40:53.138193 | orchestrator | Sunday 29 March 2026 04:40:35 +0000 (0:00:01.306) 0:00:56.720 ********** 2026-03-29 04:40:53.138308 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.138325 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.138337 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.138356 | orchestrator | 2026-03-29 04:40:53.138377 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-03-29 04:40:53.138396 | orchestrator | Sunday 29 March 2026 04:40:37 +0000 (0:00:01.545) 0:00:58.266 ********** 2026-03-29 04:40:53.138415 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 04:40:53.138433 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 04:40:53.138445 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 04:40:53.138456 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.138467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-29 04:40:53.138478 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-29 04:40:53.138489 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-29 04:40:53.138500 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.138511 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-29 04:40:53.138522 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-29 04:40:53.138532 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-2)  2026-03-29 04:40:53.138543 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.138554 | orchestrator | 2026-03-29 04:40:53.138565 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-03-29 04:40:53.138576 | orchestrator | Sunday 29 March 2026 04:40:38 +0000 (0:00:01.362) 0:00:59.628 ********** 2026-03-29 04:40:53.138586 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.138597 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.138608 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.138619 | orchestrator | 2026-03-29 04:40:53.138629 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-03-29 04:40:53.138640 | orchestrator | Sunday 29 March 2026 04:40:40 +0000 (0:00:01.357) 0:01:00.985 ********** 2026-03-29 04:40:53.138651 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.138662 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.138672 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.138683 | orchestrator | 2026-03-29 04:40:53.138694 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-03-29 04:40:53.138705 | orchestrator | Sunday 29 March 2026 04:40:41 +0000 (0:00:01.310) 0:01:02.296 ********** 2026-03-29 04:40:53.138716 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.138728 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.138741 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.138754 | orchestrator | 2026-03-29 04:40:53.138829 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-03-29 04:40:53.138846 | orchestrator | Sunday 29 March 2026 04:40:42 +0000 (0:00:01.375) 0:01:03.671 ********** 2026-03-29 04:40:53.138859 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.138872 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 04:40:53.138885 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.138897 | orchestrator | 2026-03-29 04:40:53.138910 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-03-29 04:40:53.138922 | orchestrator | Sunday 29 March 2026 04:40:44 +0000 (0:00:01.361) 0:01:05.032 ********** 2026-03-29 04:40:53.138934 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.138947 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.138959 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.138971 | orchestrator | 2026-03-29 04:40:53.138984 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-03-29 04:40:53.138996 | orchestrator | Sunday 29 March 2026 04:40:45 +0000 (0:00:01.357) 0:01:06.390 ********** 2026-03-29 04:40:53.139009 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.139022 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.139035 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.139047 | orchestrator | 2026-03-29 04:40:53.139059 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-29 04:40:53.139072 | orchestrator | Sunday 29 March 2026 04:40:46 +0000 (0:00:01.521) 0:01:07.912 ********** 2026-03-29 04:40:53.139084 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.139096 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.139107 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.139117 | orchestrator | 2026-03-29 04:40:53.139128 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-29 04:40:53.139139 | orchestrator | Sunday 29 March 2026 04:40:48 +0000 (0:00:01.380) 0:01:09.292 ********** 2026-03-29 04:40:53.139149 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.139160 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 04:40:53.139171 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:40:53.139181 | orchestrator | 2026-03-29 04:40:53.139192 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-03-29 04:40:53.139203 | orchestrator | Sunday 29 March 2026 04:40:49 +0000 (0:00:01.350) 0:01:10.642 ********** 2026-03-29 04:40:53.139258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:53.139289 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:40:53.139301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:40:53.139313 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:40:53.139340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:41:09.051180 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:41:09.051288 | orchestrator | 2026-03-29 04:41:09.051302 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-03-29 04:41:09.051313 | orchestrator | Sunday 29 March 2026 04:40:53 +0000 (0:00:03.409) 0:01:14.051 ********** 2026-03-29 04:41:09.051322 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:41:09.051332 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:41:09.051341 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:41:09.051349 | orchestrator | 2026-03-29 04:41:09.051358 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-03-29 04:41:09.051367 | orchestrator | Sunday 29 March 2026 04:40:54 +0000 (0:00:01.626) 0:01:15.677 ********** 2026-03-29 04:41:09.051380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:41:09.051392 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:41:09.051433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:41:09.051464 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:41:09.051474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 04:41:09.051484 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:41:09.051492 | orchestrator | 2026-03-29 04:41:09.051501 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-03-29 04:41:09.051511 | orchestrator | Sunday 29 March 2026 04:40:58 +0000 (0:00:03.324) 0:01:19.002 ********** 2026-03-29 04:41:09.051519 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:41:09.051528 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:41:09.051536 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:41:09.051545 | orchestrator | 2026-03-29 04:41:09.051553 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-29 04:41:09.051562 | orchestrator | Sunday 29 March 2026 04:40:59 +0000 (0:00:01.652) 0:01:20.654 ********** 2026-03-29 04:41:09.051570 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:41:09.051579 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:41:09.051588 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:41:09.051596 | orchestrator | 2026-03-29 04:41:09.051605 | orchestrator | TASK [service-check : mariadb | Fail 
if containers are missing or not running] *** 2026-03-29 04:41:09.051614 | orchestrator | Sunday 29 March 2026 04:41:00 +0000 (0:00:01.173) 0:01:21.828 ********** 2026-03-29 04:41:09.051622 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:41:09.051631 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:41:09.051640 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:41:09.051648 | orchestrator | 2026-03-29 04:41:09.051657 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-29 04:41:09.051666 | orchestrator | Sunday 29 March 2026 04:41:02 +0000 (0:00:01.293) 0:01:23.122 ********** 2026-03-29 04:41:09.051686 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:41:09.051694 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:41:09.051703 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:41:09.051712 | orchestrator | 2026-03-29 04:41:09.051720 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-29 04:41:09.051731 | orchestrator | Sunday 29 March 2026 04:41:03 +0000 (0:00:01.653) 0:01:24.775 ********** 2026-03-29 04:41:09.051741 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:41:09.051785 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:41:09.051797 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:41:09.051807 | orchestrator | 2026-03-29 04:41:09.051817 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-29 04:41:09.051828 | orchestrator | Sunday 29 March 2026 04:41:05 +0000 (0:00:01.774) 0:01:26.550 ********** 2026-03-29 04:41:09.051838 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:41:09.051849 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:41:09.051860 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:41:09.051870 | orchestrator | 2026-03-29 04:41:09.051880 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume 
availability] ************* 2026-03-29 04:41:09.051891 | orchestrator | Sunday 29 March 2026 04:41:07 +0000 (0:00:01.851) 0:01:28.401 ********** 2026-03-29 04:41:09.051901 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:41:09.051911 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:41:09.051921 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:41:09.051931 | orchestrator | 2026-03-29 04:41:09.051942 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-29 04:41:09.051956 | orchestrator | Sunday 29 March 2026 04:41:08 +0000 (0:00:01.374) 0:01:29.776 ********** 2026-03-29 04:41:09.051979 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.776239 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.776367 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.776384 | orchestrator | 2026-03-29 04:43:47.776397 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-29 04:43:47.776410 | orchestrator | Sunday 29 March 2026 04:41:10 +0000 (0:00:01.320) 0:01:31.096 ********** 2026-03-29 04:43:47.776422 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.776433 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.776444 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.776455 | orchestrator | 2026-03-29 04:43:47.776466 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-29 04:43:47.776477 | orchestrator | Sunday 29 March 2026 04:41:12 +0000 (0:00:02.062) 0:01:33.159 ********** 2026-03-29 04:43:47.776488 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.776499 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.776510 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.776571 | orchestrator | 2026-03-29 04:43:47.776584 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-29 04:43:47.776596 | orchestrator | 
Sunday 29 March 2026 04:41:13 +0000 (0:00:01.355) 0:01:34.514 ********** 2026-03-29 04:43:47.776607 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:43:47.776619 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.776630 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.776641 | orchestrator | 2026-03-29 04:43:47.776652 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-29 04:43:47.776663 | orchestrator | Sunday 29 March 2026 04:41:14 +0000 (0:00:01.331) 0:01:35.846 ********** 2026-03-29 04:43:47.776674 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.776685 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.776695 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.776706 | orchestrator | 2026-03-29 04:43:47.776717 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-29 04:43:47.776736 | orchestrator | Sunday 29 March 2026 04:41:18 +0000 (0:00:03.675) 0:01:39.521 ********** 2026-03-29 04:43:47.776755 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.776775 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.776828 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.776849 | orchestrator | 2026-03-29 04:43:47.776869 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-29 04:43:47.776889 | orchestrator | Sunday 29 March 2026 04:41:19 +0000 (0:00:01.400) 0:01:40.922 ********** 2026-03-29 04:43:47.776908 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.776928 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.776948 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.776967 | orchestrator | 2026-03-29 04:43:47.776988 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-29 04:43:47.777008 | orchestrator | Sunday 29 March 2026 04:41:21 +0000 (0:00:01.315) 
0:01:42.237 ********** 2026-03-29 04:43:47.777027 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:43:47.777048 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.777067 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.777085 | orchestrator | 2026-03-29 04:43:47.777096 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 04:43:47.777107 | orchestrator | Sunday 29 March 2026 04:41:22 +0000 (0:00:01.678) 0:01:43.916 ********** 2026-03-29 04:43:47.777118 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:43:47.777129 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.777139 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.777150 | orchestrator | 2026-03-29 04:43:47.777161 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 04:43:47.777171 | orchestrator | Sunday 29 March 2026 04:41:24 +0000 (0:00:01.483) 0:01:45.400 ********** 2026-03-29 04:43:47.777182 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:43:47.777193 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.777204 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.777214 | orchestrator | 2026-03-29 04:43:47.777225 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-29 04:43:47.777235 | orchestrator | Sunday 29 March 2026 04:41:25 +0000 (0:00:01.497) 0:01:46.898 ********** 2026-03-29 04:43:47.777246 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:43:47.777256 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:43:47.777267 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:43:47.777277 | orchestrator | 2026-03-29 04:43:47.777288 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-29 04:43:47.777299 | orchestrator | Sunday 29 March 2026 04:41:27 +0000 (0:00:01.528) 
0:01:48.426 ********** 2026-03-29 04:43:47.777309 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:43:47.777320 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.777346 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.777357 | orchestrator | 2026-03-29 04:43:47.777368 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-29 04:43:47.777379 | orchestrator | 2026-03-29 04:43:47.777390 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-29 04:43:47.777401 | orchestrator | Sunday 29 March 2026 04:41:29 +0000 (0:00:01.830) 0:01:50.256 ********** 2026-03-29 04:43:47.777412 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:43:47.777422 | orchestrator | 2026-03-29 04:43:47.777433 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 04:43:47.777443 | orchestrator | Sunday 29 March 2026 04:41:55 +0000 (0:00:26.593) 0:02:16.849 ********** 2026-03-29 04:43:47.777454 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service port liveness (10 retries left). 
2026-03-29 04:43:47.777466 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.777477 | orchestrator | 2026-03-29 04:43:47.777487 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 04:43:47.777498 | orchestrator | Sunday 29 March 2026 04:42:04 +0000 (0:00:08.180) 0:02:25.030 ********** 2026-03-29 04:43:47.777509 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.777546 | orchestrator | 2026-03-29 04:43:47.777559 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-29 04:43:47.777580 | orchestrator | 2026-03-29 04:43:47.777591 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-29 04:43:47.777602 | orchestrator | Sunday 29 March 2026 04:42:07 +0000 (0:00:03.067) 0:02:28.097 ********** 2026-03-29 04:43:47.777613 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:43:47.777624 | orchestrator | 2026-03-29 04:43:47.777654 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 04:43:47.777666 | orchestrator | Sunday 29 March 2026 04:42:33 +0000 (0:00:26.564) 0:02:54.662 ********** 2026-03-29 04:43:47.777677 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 
2026-03-29 04:43:47.777689 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.777700 | orchestrator | 2026-03-29 04:43:47.777710 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 04:43:47.777721 | orchestrator | Sunday 29 March 2026 04:42:42 +0000 (0:00:08.402) 0:03:03.064 ********** 2026-03-29 04:43:47.777732 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.777743 | orchestrator | 2026-03-29 04:43:47.777754 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-29 04:43:47.777765 | orchestrator | 2026-03-29 04:43:47.777775 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-29 04:43:47.777786 | orchestrator | Sunday 29 March 2026 04:42:45 +0000 (0:00:03.428) 0:03:06.492 ********** 2026-03-29 04:43:47.777797 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:43:47.777808 | orchestrator | 2026-03-29 04:43:47.777819 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 04:43:47.777835 | orchestrator | Sunday 29 March 2026 04:43:09 +0000 (0:00:24.120) 0:03:30.613 ********** 2026-03-29 04:43:47.777854 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.777873 | orchestrator | 2026-03-29 04:43:47.777891 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 04:43:47.777909 | orchestrator | Sunday 29 March 2026 04:43:14 +0000 (0:00:05.241) 0:03:35.855 ********** 2026-03-29 04:43:47.777928 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-29 04:43:47.777947 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 04:43:47.777965 | orchestrator | mariadb_bootstrap_restart 2026-03-29 04:43:47.777984 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.778003 | orchestrator | 2026-03-29 
04:43:47.778105 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-29 04:43:47.778129 | orchestrator | skipping: no hosts matched 2026-03-29 04:43:47.778149 | orchestrator | 2026-03-29 04:43:47.778168 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-29 04:43:47.778183 | orchestrator | skipping: no hosts matched 2026-03-29 04:43:47.778193 | orchestrator | 2026-03-29 04:43:47.778204 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-29 04:43:47.778215 | orchestrator | 2026-03-29 04:43:47.778226 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-29 04:43:47.778237 | orchestrator | Sunday 29 March 2026 04:43:19 +0000 (0:00:04.203) 0:03:40.058 ********** 2026-03-29 04:43:47.778248 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:43:47.778259 | orchestrator | 2026-03-29 04:43:47.778270 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-29 04:43:47.778280 | orchestrator | Sunday 29 March 2026 04:43:20 +0000 (0:00:01.808) 0:03:41.867 ********** 2026-03-29 04:43:47.778291 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.778302 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.778313 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.778323 | orchestrator | 2026-03-29 04:43:47.778334 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-29 04:43:47.778345 | orchestrator | Sunday 29 March 2026 04:43:24 +0000 (0:00:03.281) 0:03:45.149 ********** 2026-03-29 04:43:47.778355 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.778382 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.778409 | orchestrator | changed: [testbed-node-0] 2026-03-29 
04:43:47.778427 | orchestrator | 2026-03-29 04:43:47.778443 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-29 04:43:47.778460 | orchestrator | Sunday 29 March 2026 04:43:27 +0000 (0:00:03.237) 0:03:48.386 ********** 2026-03-29 04:43:47.778477 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.778494 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.778510 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.778559 | orchestrator | 2026-03-29 04:43:47.778578 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-29 04:43:47.778594 | orchestrator | Sunday 29 March 2026 04:43:30 +0000 (0:00:03.253) 0:03:51.639 ********** 2026-03-29 04:43:47.778611 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.778628 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.778646 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:43:47.778663 | orchestrator | 2026-03-29 04:43:47.778691 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-29 04:43:47.778708 | orchestrator | Sunday 29 March 2026 04:43:34 +0000 (0:00:03.370) 0:03:55.010 ********** 2026-03-29 04:43:47.778728 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.778748 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.778767 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.778781 | orchestrator | 2026-03-29 04:43:47.778792 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-29 04:43:47.778802 | orchestrator | Sunday 29 March 2026 04:43:39 +0000 (0:00:05.716) 0:04:00.727 ********** 2026-03-29 04:43:47.778813 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.778824 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:43:47.778834 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.778845 | 
orchestrator | 2026-03-29 04:43:47.778856 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-29 04:43:47.778866 | orchestrator | Sunday 29 March 2026 04:43:42 +0000 (0:00:02.992) 0:04:03.719 ********** 2026-03-29 04:43:47.778877 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:43:47.778888 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:43:47.778898 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:43:47.778909 | orchestrator | 2026-03-29 04:43:47.778920 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-29 04:43:47.778935 | orchestrator | Sunday 29 March 2026 04:43:44 +0000 (0:00:01.372) 0:04:05.092 ********** 2026-03-29 04:43:47.778953 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:43:47.778979 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:43:47.778999 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:43:47.779016 | orchestrator | 2026-03-29 04:43:47.779048 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-29 04:44:08.908661 | orchestrator | Sunday 29 March 2026 04:43:47 +0000 (0:00:03.597) 0:04:08.690 ********** 2026-03-29 04:44:08.908789 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:44:08.908806 | orchestrator | 2026-03-29 04:44:08.908819 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-03-29 04:44:08.908830 | orchestrator | Sunday 29 March 2026 04:43:49 +0000 (0:00:01.847) 0:04:10.538 ********** 2026-03-29 04:44:08.908841 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:44:08.908854 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:44:08.908865 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:44:08.908876 | orchestrator | 2026-03-29 04:44:08.908887 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-29 04:44:08.908899 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-29 04:44:08.908912 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-29 04:44:08.908952 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-29 04:44:08.908964 | orchestrator | 2026-03-29 04:44:08.908975 | orchestrator | 2026-03-29 04:44:08.908986 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:44:08.908997 | orchestrator | Sunday 29 March 2026 04:44:08 +0000 (0:00:18.854) 0:04:29.392 ********** 2026-03-29 04:44:08.909008 | orchestrator | =============================================================================== 2026-03-29 04:44:08.909018 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 77.28s 2026-03-29 04:44:08.909029 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.82s 2026-03-29 04:44:08.909040 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 18.85s 2026-03-29 04:44:08.909050 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.70s 2026-03-29 04:44:08.909061 | orchestrator | service-check : mariadb | Get container facts --------------------------- 5.72s 2026-03-29 04:44:08.909071 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.62s 2026-03-29 04:44:08.909082 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.42s 2026-03-29 04:44:08.909093 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.78s 2026-03-29 04:44:08.909103 | orchestrator | mariadb : Check MariaDB service WSREP 
sync status ----------------------- 3.68s 2026-03-29 04:44:08.909114 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.67s 2026-03-29 04:44:08.909125 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.60s 2026-03-29 04:44:08.909135 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.51s 2026-03-29 04:44:08.909149 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.46s 2026-03-29 04:44:08.909161 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.41s 2026-03-29 04:44:08.909174 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.37s 2026-03-29 04:44:08.909186 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.32s 2026-03-29 04:44:08.909199 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.28s 2026-03-29 04:44:08.909211 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.25s 2026-03-29 04:44:08.909224 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.24s 2026-03-29 04:44:08.909237 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.17s 2026-03-29 04:44:09.232196 | orchestrator | + osism apply -a upgrade rabbitmq 2026-03-29 04:44:11.272622 | orchestrator | 2026-03-29 04:44:11 | INFO  | Task eb5bc5ca-1be3-4fff-b315-21603a087884 (rabbitmq) was prepared for execution. 2026-03-29 04:44:11.273676 | orchestrator | 2026-03-29 04:44:11 | INFO  | It takes a moment until task eb5bc5ca-1be3-4fff-b315-21603a087884 (rabbitmq) has been started and output is visible here. 
2026-03-29 04:44:53.213278 | orchestrator | 2026-03-29 04:44:53.213373 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 04:44:53.213383 | orchestrator | 2026-03-29 04:44:53.213390 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 04:44:53.213396 | orchestrator | Sunday 29 March 2026 04:44:16 +0000 (0:00:01.290) 0:00:01.290 ********** 2026-03-29 04:44:53.213403 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:44:53.213410 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:44:53.213416 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:44:53.213422 | orchestrator | 2026-03-29 04:44:53.213428 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 04:44:53.213434 | orchestrator | Sunday 29 March 2026 04:44:18 +0000 (0:00:02.009) 0:00:03.300 ********** 2026-03-29 04:44:53.213460 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-29 04:44:53.213586 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-29 04:44:53.213592 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-29 04:44:53.213596 | orchestrator | 2026-03-29 04:44:53.213600 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-29 04:44:53.213604 | orchestrator | 2026-03-29 04:44:53.213608 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 04:44:53.213612 | orchestrator | Sunday 29 March 2026 04:44:20 +0000 (0:00:01.868) 0:00:05.168 ********** 2026-03-29 04:44:53.213617 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:44:53.213622 | orchestrator | 2026-03-29 04:44:53.213626 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2026-03-29 04:44:53.213631 | orchestrator | Sunday 29 March 2026 04:44:22 +0000 (0:00:02.194) 0:00:07.363 ********** 2026-03-29 04:44:53.213634 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:44:53.213638 | orchestrator | 2026-03-29 04:44:53.213642 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-29 04:44:53.213645 | orchestrator | Sunday 29 March 2026 04:44:24 +0000 (0:00:02.298) 0:00:09.661 ********** 2026-03-29 04:44:53.213649 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:44:53.213653 | orchestrator | 2026-03-29 04:44:53.213657 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-29 04:44:53.213660 | orchestrator | Sunday 29 March 2026 04:44:28 +0000 (0:00:03.353) 0:00:13.014 ********** 2026-03-29 04:44:53.213664 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:44:53.213669 | orchestrator | 2026-03-29 04:44:53.213673 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-29 04:44:53.213676 | orchestrator | Sunday 29 March 2026 04:44:37 +0000 (0:00:09.399) 0:00:22.414 ********** 2026-03-29 04:44:53.213680 | orchestrator | ok: [testbed-node-0] => { 2026-03-29 04:44:53.213684 | orchestrator |  "changed": false, 2026-03-29 04:44:53.213688 | orchestrator |  "msg": "All assertions passed" 2026-03-29 04:44:53.213692 | orchestrator | } 2026-03-29 04:44:53.213696 | orchestrator | 2026-03-29 04:44:53.213699 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-29 04:44:53.213703 | orchestrator | Sunday 29 March 2026 04:44:39 +0000 (0:00:01.322) 0:00:23.737 ********** 2026-03-29 04:44:53.213707 | orchestrator | ok: [testbed-node-0] => { 2026-03-29 04:44:53.213710 | orchestrator |  "changed": false, 2026-03-29 04:44:53.213714 | orchestrator |  "msg": "All assertions passed" 2026-03-29 04:44:53.213718 | orchestrator | } 2026-03-29 04:44:53.213722 | 
orchestrator | 2026-03-29 04:44:53.213725 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 04:44:53.213729 | orchestrator | Sunday 29 March 2026 04:44:40 +0000 (0:00:01.559) 0:00:25.297 ********** 2026-03-29 04:44:53.213733 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:44:53.213737 | orchestrator | 2026-03-29 04:44:53.213741 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-29 04:44:53.213744 | orchestrator | Sunday 29 March 2026 04:44:42 +0000 (0:00:01.578) 0:00:26.875 ********** 2026-03-29 04:44:53.213748 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:44:53.213752 | orchestrator | 2026-03-29 04:44:53.213755 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-29 04:44:53.213759 | orchestrator | Sunday 29 March 2026 04:44:44 +0000 (0:00:02.202) 0:00:29.078 ********** 2026-03-29 04:44:53.213763 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:44:53.213767 | orchestrator | 2026-03-29 04:44:53.213771 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-29 04:44:53.213774 | orchestrator | Sunday 29 March 2026 04:44:47 +0000 (0:00:02.805) 0:00:31.883 ********** 2026-03-29 04:44:53.213778 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:44:53.213782 | orchestrator | 2026-03-29 04:44:53.213792 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-29 04:44:53.213796 | orchestrator | Sunday 29 March 2026 04:44:49 +0000 (0:00:01.861) 0:00:33.745 ********** 2026-03-29 04:44:53.213828 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:44:53.213835 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-03-29 04:44:53.213840 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:44:53.213844 | orchestrator | 2026-03-29 04:44:53.213848 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-29 04:44:53.213852 | orchestrator | Sunday 29 March 2026 04:44:50 +0000 (0:00:01.714) 0:00:35.459 ********** 2026-03-29 04:44:53.213857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:44:53.213873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:45:12.246809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:45:12.246962 | orchestrator | 2026-03-29 04:45:12.246993 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-29 04:45:12.247014 | orchestrator | Sunday 29 March 2026 04:44:53 +0000 (0:00:02.413) 0:00:37.873 ********** 2026-03-29 04:45:12.247034 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 04:45:12.247053 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 04:45:12.247071 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 04:45:12.247090 | orchestrator | 2026-03-29 04:45:12.247109 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-29 04:45:12.247128 | orchestrator | Sunday 29 March 2026 04:44:55 +0000 (0:00:02.374) 0:00:40.248 ********** 2026-03-29 04:45:12.247149 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 04:45:12.247167 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 04:45:12.247210 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 04:45:12.247222 | orchestrator | 2026-03-29 04:45:12.247233 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-29 04:45:12.247244 | orchestrator | Sunday 29 March 2026 04:44:58 +0000 (0:00:03.073) 0:00:43.321 ********** 2026-03-29 04:45:12.247255 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 04:45:12.247265 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 04:45:12.247276 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 04:45:12.247287 | orchestrator | 2026-03-29 04:45:12.247297 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-29 04:45:12.247308 | orchestrator | Sunday 29 March 2026 04:45:00 +0000 (0:00:02.327) 0:00:45.649 ********** 2026-03-29 04:45:12.247319 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 04:45:12.247329 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 04:45:12.247340 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 04:45:12.247350 | orchestrator | 2026-03-29 04:45:12.247361 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-29 04:45:12.247371 | orchestrator | Sunday 29 March 2026 04:45:03 +0000 (0:00:02.355) 0:00:48.004 ********** 2026-03-29 04:45:12.247382 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 04:45:12.247407 | orchestrator | ok: 
[testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 04:45:12.247418 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 04:45:12.247429 | orchestrator | 2026-03-29 04:45:12.247440 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-29 04:45:12.247483 | orchestrator | Sunday 29 March 2026 04:45:05 +0000 (0:00:02.256) 0:00:50.261 ********** 2026-03-29 04:45:12.247495 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 04:45:12.247506 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 04:45:12.247516 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 04:45:12.247527 | orchestrator | 2026-03-29 04:45:12.247537 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 04:45:12.247548 | orchestrator | Sunday 29 March 2026 04:45:08 +0000 (0:00:02.507) 0:00:52.768 ********** 2026-03-29 04:45:12.247559 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:45:12.247570 | orchestrator | 2026-03-29 04:45:12.247601 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-03-29 04:45:12.247613 | orchestrator | Sunday 29 March 2026 04:45:09 +0000 (0:00:01.692) 0:00:54.461 ********** 2026-03-29 04:45:12.247625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:45:12.247648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:45:12.247668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:45:12.247680 | orchestrator | 2026-03-29 04:45:12.247691 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-03-29 04:45:12.247702 | orchestrator | Sunday 29 March 2026 04:45:12 +0000 (0:00:02.323) 0:00:56.784 ********** 2026-03-29 04:45:12.247728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:45:21.632036 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:45:21.632151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:45:21.632196 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:45:21.632210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:45:21.632223 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:45:21.632235 | orchestrator | 2026-03-29 04:45:21.632246 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-29 04:45:21.632258 | orchestrator | Sunday 29 March 2026 04:45:13 +0000 (0:00:01.530) 0:00:58.314 ********** 2026-03-29 04:45:21.632284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:45:21.632297 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:45:21.632327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:45:21.632349 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:45:21.632361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:45:21.632372 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:45:21.632383 | orchestrator | 2026-03-29 04:45:21.632395 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-29 04:45:21.632406 | orchestrator | Sunday 29 March 2026 04:45:15 +0000 (0:00:01.844) 0:01:00.159 ********** 2026-03-29 04:45:21.632417 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:45:21.632428 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:45:21.632470 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:45:21.632483 | orchestrator | 2026-03-29 04:45:21.632498 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-29 04:45:21.632516 | orchestrator | Sunday 29 March 2026 04:45:19 +0000 (0:00:03.893) 0:01:04.052 ********** 2026-03-29 04:45:21.632536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:45:21.632570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:47:08.993446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 04:47:08.993538 | orchestrator | 2026-03-29 04:47:08.993548 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-29 04:47:08.993555 | orchestrator | Sunday 29 March 2026 04:45:21 +0000 (0:00:02.241) 0:01:06.294 ********** 2026-03-29 04:47:08.993562 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:47:08.993568 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:47:08.993574 | orchestrator | } 2026-03-29 04:47:08.993580 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:47:08.993585 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:47:08.993591 | orchestrator | } 2026-03-29 04:47:08.993596 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:47:08.993673 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:47:08.993684 | orchestrator | } 2026-03-29 04:47:08.993689 | orchestrator | 2026-03-29 04:47:08.993695 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:47:08.993701 | orchestrator | Sunday 29 March 2026 04:45:23 +0000 (0:00:01.409) 0:01:07.703 ********** 2026-03-29 04:47:08.993710 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:47:08.993717 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:47:08.993724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:47:08.993742 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:47:08.993762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 04:47:08.993768 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:47:08.993773 | orchestrator | 2026-03-29 04:47:08.993779 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-29 04:47:08.993784 | orchestrator | Sunday 29 March 2026 04:45:24 +0000 (0:00:01.962) 0:01:09.666 ********** 2026-03-29 04:47:08.993790 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:47:08.993796 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:47:08.993801 | orchestrator | 
changed: [testbed-node-2] 2026-03-29 04:47:08.993806 | orchestrator | 2026-03-29 04:47:08.993812 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 04:47:08.993817 | orchestrator | 2026-03-29 04:47:08.993823 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 04:47:08.993828 | orchestrator | Sunday 29 March 2026 04:45:27 +0000 (0:00:02.043) 0:01:11.710 ********** 2026-03-29 04:47:08.993833 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:47:08.993840 | orchestrator | 2026-03-29 04:47:08.993848 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 04:47:08.993857 | orchestrator | Sunday 29 March 2026 04:45:29 +0000 (0:00:02.090) 0:01:13.800 ********** 2026-03-29 04:47:08.993865 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:47:08.993874 | orchestrator | 2026-03-29 04:47:08.993883 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 04:47:08.993893 | orchestrator | Sunday 29 March 2026 04:45:38 +0000 (0:00:09.313) 0:01:23.114 ********** 2026-03-29 04:47:08.993906 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:47:08.993915 | orchestrator | 2026-03-29 04:47:08.993923 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 04:47:08.993932 | orchestrator | Sunday 29 March 2026 04:45:47 +0000 (0:00:09.108) 0:01:32.222 ********** 2026-03-29 04:47:08.993940 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:47:08.993956 | orchestrator | 2026-03-29 04:47:08.993966 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 04:47:08.993974 | orchestrator | 2026-03-29 04:47:08.993983 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 04:47:08.993992 | orchestrator | 
Sunday 29 March 2026 04:45:56 +0000 (0:00:09.223) 0:01:41.446 ********** 2026-03-29 04:47:08.994000 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:47:08.994010 | orchestrator | 2026-03-29 04:47:08.994066 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 04:47:08.994076 | orchestrator | Sunday 29 March 2026 04:45:58 +0000 (0:00:01.800) 0:01:43.246 ********** 2026-03-29 04:47:08.994082 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:47:08.994087 | orchestrator | 2026-03-29 04:47:08.994093 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 04:47:08.994098 | orchestrator | Sunday 29 March 2026 04:46:08 +0000 (0:00:10.203) 0:01:53.450 ********** 2026-03-29 04:47:08.994104 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:47:08.994109 | orchestrator | 2026-03-29 04:47:08.994114 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 04:47:08.994120 | orchestrator | Sunday 29 March 2026 04:46:23 +0000 (0:00:15.054) 0:02:08.504 ********** 2026-03-29 04:47:08.994125 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:47:08.994131 | orchestrator | 2026-03-29 04:47:08.994136 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 04:47:08.994148 | orchestrator | 2026-03-29 04:47:08.994154 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 04:47:08.994159 | orchestrator | Sunday 29 March 2026 04:46:33 +0000 (0:00:10.116) 0:02:18.621 ********** 2026-03-29 04:47:08.994165 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:47:08.994170 | orchestrator | 2026-03-29 04:47:08.994176 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 04:47:08.994181 | orchestrator | Sunday 29 March 2026 04:46:35 +0000 (0:00:01.794) 
0:02:20.415 ********** 2026-03-29 04:47:08.994186 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:47:08.994192 | orchestrator | 2026-03-29 04:47:08.994197 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 04:47:08.994202 | orchestrator | Sunday 29 March 2026 04:46:44 +0000 (0:00:09.258) 0:02:29.673 ********** 2026-03-29 04:47:08.994208 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:47:08.994213 | orchestrator | 2026-03-29 04:47:08.994218 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 04:47:08.994224 | orchestrator | Sunday 29 March 2026 04:46:59 +0000 (0:00:14.066) 0:02:43.740 ********** 2026-03-29 04:47:08.994229 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:47:08.994234 | orchestrator | 2026-03-29 04:47:08.994240 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-29 04:47:08.994245 | orchestrator | 2026-03-29 04:47:08.994251 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-29 04:47:08.994264 | orchestrator | Sunday 29 March 2026 04:47:08 +0000 (0:00:09.910) 0:02:53.651 ********** 2026-03-29 04:47:14.856209 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:47:14.856348 | orchestrator | 2026-03-29 04:47:14.856434 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-29 04:47:14.856455 | orchestrator | Sunday 29 March 2026 04:47:10 +0000 (0:00:01.320) 0:02:54.972 ********** 2026-03-29 04:47:14.856472 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:47:14.856491 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:47:14.856507 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:47:14.856524 | orchestrator | 2026-03-29 04:47:14.856539 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-29 04:47:14.856557 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 04:47:14.856607 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 04:47:14.856649 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 04:47:14.856666 | orchestrator | 2026-03-29 04:47:14.856682 | orchestrator | 2026-03-29 04:47:14.856698 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 04:47:14.856715 | orchestrator | Sunday 29 March 2026 04:47:14 +0000 (0:00:04.214) 0:02:59.186 ********** 2026-03-29 04:47:14.856732 | orchestrator | =============================================================================== 2026-03-29 04:47:14.856748 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 38.23s 2026-03-29 04:47:14.856765 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 29.25s 2026-03-29 04:47:14.856782 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 28.78s 2026-03-29 04:47:14.856798 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.40s 2026-03-29 04:47:14.856815 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.68s 2026-03-29 04:47:14.856831 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.21s 2026-03-29 04:47:14.856847 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.89s 2026-03-29 04:47:14.856864 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.35s 2026-03-29 04:47:14.856881 | orchestrator | rabbitmq : Copying over rabbitmq.conf 
----------------------------------- 3.07s 2026-03-29 04:47:14.856898 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.81s 2026-03-29 04:47:14.856914 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.51s 2026-03-29 04:47:14.856932 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.41s 2026-03-29 04:47:14.856948 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.37s 2026-03-29 04:47:14.856964 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.36s 2026-03-29 04:47:14.856981 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.33s 2026-03-29 04:47:14.856996 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.32s 2026-03-29 04:47:14.857010 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.30s 2026-03-29 04:47:14.857046 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.26s 2026-03-29 04:47:14.857063 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.24s 2026-03-29 04:47:14.857080 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.20s 2026-03-29 04:47:15.124630 | orchestrator | + osism apply -a upgrade openvswitch 2026-03-29 04:47:17.147026 | orchestrator | 2026-03-29 04:47:17 | INFO  | Task f0d210de-303c-4805-9f80-e1809e90c644 (openvswitch) was prepared for execution. 2026-03-29 04:47:17.147145 | orchestrator | 2026-03-29 04:47:17 | INFO  | It takes a moment until task f0d210de-303c-4805-9f80-e1809e90c644 (openvswitch) has been started and output is visible here. 
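The RabbitMQ plays above follow a serial rolling-restart pattern: each node is put into maintenance mode, its container is restarted, the playbook waits for startup, and only afterwards are all stable feature flags enabled. A minimal sketch of that sequence, assuming `rabbitmq-upgrade`, `rabbitmq-diagnostics`, and `rabbitmqctl` inside the `rabbitmq` container (these are standard RabbitMQ CLI commands, not the exact playbook tasks; `RUN` defaults to `echo` so the sketch prints the commands instead of executing them):

```shell
# Hypothetical per-node rolling restart, mirroring the play order in the log.
# Node and container names are taken from the log; set RUN="" on a real
# deployment host to actually execute the commands.
RUN="${RUN:-echo}"
for node in testbed-node-0 testbed-node-1 testbed-node-2; do
  $RUN ssh "$node" docker exec rabbitmq rabbitmq-upgrade drain                 # maintenance mode
  $RUN ssh "$node" docker restart rabbitmq                                     # restart container
  $RUN ssh "$node" docker exec rabbitmq rabbitmq-diagnostics -q await_startup  # wait for startup
done
# Post-deploy step, matching "Enable all stable feature flags" above:
$RUN ssh testbed-node-0 docker exec rabbitmq rabbitmqctl enable_feature_flag all
```

Draining one node at a time keeps quorum in the three-node cluster, which is why the three "Restart rabbitmq services" plays run sequentially rather than in parallel.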
2026-03-29 04:47:41.622069 | orchestrator | 2026-03-29 04:47:41.622188 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 04:47:41.622206 | orchestrator | 2026-03-29 04:47:41.622219 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 04:47:41.622232 | orchestrator | Sunday 29 March 2026 04:47:22 +0000 (0:00:01.613) 0:00:01.613 ********** 2026-03-29 04:47:41.622244 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:47:41.622258 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:47:41.622270 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:47:41.622282 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:47:41.622293 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:47:41.622333 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:47:41.622346 | orchestrator | 2026-03-29 04:47:41.622405 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 04:47:41.622416 | orchestrator | Sunday 29 March 2026 04:47:25 +0000 (0:00:02.467) 0:00:04.080 ********** 2026-03-29 04:47:41.622428 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 04:47:41.622439 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 04:47:41.622450 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 04:47:41.622461 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 04:47:41.622473 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 04:47:41.622484 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 04:47:41.622495 | orchestrator | 2026-03-29 04:47:41.622504 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-29 04:47:41.622515 | orchestrator | 2026-03-29 04:47:41.622526 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-29 04:47:41.622537 | orchestrator | Sunday 29 March 2026 04:47:27 +0000 (0:00:01.953) 0:00:06.034 ********** 2026-03-29 04:47:41.622550 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 04:47:41.622562 | orchestrator | 2026-03-29 04:47:41.622574 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-29 04:47:41.622588 | orchestrator | Sunday 29 March 2026 04:47:30 +0000 (0:00:02.672) 0:00:08.706 ********** 2026-03-29 04:47:41.622600 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-29 04:47:41.622612 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-29 04:47:41.622623 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-29 04:47:41.622635 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-29 04:47:41.622647 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-29 04:47:41.622658 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-29 04:47:41.622670 | orchestrator | 2026-03-29 04:47:41.622681 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-29 04:47:41.622693 | orchestrator | Sunday 29 March 2026 04:47:32 +0000 (0:00:02.129) 0:00:10.835 ********** 2026-03-29 04:47:41.622703 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-29 04:47:41.622716 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-29 04:47:41.622728 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-29 04:47:41.622739 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-29 
04:47:41.622752 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-29 04:47:41.622761 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-29 04:47:41.622768 | orchestrator | 2026-03-29 04:47:41.622776 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-29 04:47:41.622784 | orchestrator | Sunday 29 March 2026 04:47:34 +0000 (0:00:02.611) 0:00:13.447 ********** 2026-03-29 04:47:41.622792 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-29 04:47:41.622800 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:47:41.622809 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-29 04:47:41.622817 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:47:41.622825 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-29 04:47:41.622833 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:47:41.622841 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-29 04:47:41.622849 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:47:41.622857 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-29 04:47:41.622864 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:47:41.622882 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-29 04:47:41.622889 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:47:41.622895 | orchestrator | 2026-03-29 04:47:41.622903 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-29 04:47:41.622914 | orchestrator | Sunday 29 March 2026 04:47:36 +0000 (0:00:02.192) 0:00:15.640 ********** 2026-03-29 04:47:41.622924 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:47:41.622942 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:47:41.622949 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:47:41.622956 | orchestrator | skipping: 
[testbed-node-3] 2026-03-29 04:47:41.622962 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:47:41.622969 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:47:41.622976 | orchestrator | 2026-03-29 04:47:41.622982 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-29 04:47:41.622989 | orchestrator | Sunday 29 March 2026 04:47:38 +0000 (0:00:01.986) 0:00:17.626 ********** 2026-03-29 04:47:41.623019 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:41.623032 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:41.623039 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:41.623047 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:41.623060 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:41.623071 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:41.623084 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051406 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051480 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051487 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051519 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051523 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051528 | orchestrator | 2026-03-29 04:47:44.051533 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-29 04:47:44.051538 | orchestrator | Sunday 29 March 2026 04:47:41 +0000 (0:00:02.612) 0:00:20.238 ********** 2026-03-29 04:47:44.051552 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051558 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051562 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051570 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051577 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051581 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:44.051589 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:49.717524 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:49.717654 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:49.717703 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:49.717736 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:49.717752 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:49.717768 | orchestrator | 2026-03-29 04:47:49.717784 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-29 04:47:49.717800 | orchestrator | Sunday 29 March 2026 04:47:45 +0000 (0:00:03.589) 0:00:23.828 ********** 2026-03-29 04:47:49.717816 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:47:49.717833 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:47:49.717847 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:47:49.717862 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:47:49.717876 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:47:49.717890 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:47:49.717905 | orchestrator | 2026-03-29 04:47:49.717920 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-03-29 04:47:49.717955 | orchestrator | Sunday 29 March 2026 04:47:47 +0000 (0:00:02.425) 0:00:26.254 ********** 2026-03-29 04:47:49.717966 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:49.717987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:49.718004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:49.718015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:49.718077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:49.718097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 04:47:53.377798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:53.377915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:53.377946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:53.377959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:53.377978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:53.378015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-29 04:47:53.378134 | orchestrator | 2026-03-29 04:47:53.378156 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-03-29 04:47:53.378175 | orchestrator | Sunday 29 March 2026 04:47:50 +0000 (0:00:03.335) 0:00:29.589 ********** 2026-03-29 04:47:53.378191 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:47:53.378208 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:47:53.378225 | orchestrator | } 2026-03-29 04:47:53.378241 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:47:53.378257 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:47:53.378274 | orchestrator | } 2026-03-29 04:47:53.378284 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:47:53.378294 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 
04:47:53.378304 | orchestrator | } 2026-03-29 04:47:53.378314 | orchestrator | changed: [testbed-node-3] => { 2026-03-29 04:47:53.378324 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:47:53.378335 | orchestrator | } 2026-03-29 04:47:53.378376 | orchestrator | changed: [testbed-node-4] => { 2026-03-29 04:47:53.378391 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:47:53.378408 | orchestrator | } 2026-03-29 04:47:53.378425 | orchestrator | changed: [testbed-node-5] => { 2026-03-29 04:47:53.378442 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:47:53.378458 | orchestrator | } 2026-03-29 04:47:53.378475 | orchestrator | 2026-03-29 04:47:53.378490 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:47:53.378507 | orchestrator | Sunday 29 March 2026 04:47:52 +0000 (0:00:01.883) 0:00:31.473 ********** 2026-03-29 04:47:53.378535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-29 04:47:53.378556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-29 04:47:53.378575 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:47:53.378594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-29 04:47:53.378624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}})  2026-03-29 04:47:53.378657 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:48:23.751916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-29 04:48:23.752023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-29 04:48:23.752037 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:48:23.752061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-29 04:48:23.752069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-29 04:48:23.752108 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:48:23.752118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-29 04:48:23.752141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-29 04:48:23.752150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-29 04:48:23.752158 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:48:23.752166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-29 04:48:23.752178 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:48:23.752187 | orchestrator | 2026-03-29 04:48:23.752196 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 04:48:23.752206 | orchestrator | Sunday 29 March 2026 04:47:55 +0000 (0:00:02.568) 0:00:34.041 ********** 2026-03-29 04:48:23.752213 | orchestrator | 2026-03-29 04:48:23.752221 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 04:48:23.752228 | orchestrator | Sunday 29 March 2026 04:47:55 +0000 (0:00:00.523) 0:00:34.565 ********** 2026-03-29 04:48:23.752236 | orchestrator | 2026-03-29 04:48:23.752243 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 04:48:23.752257 | orchestrator | Sunday 29 March 2026 04:47:56 +0000 (0:00:00.537) 0:00:35.102 ********** 2026-03-29 04:48:23.752265 | orchestrator | 2026-03-29 04:48:23.752273 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 04:48:23.752280 | orchestrator | Sunday 29 March 2026 04:47:57 +0000 (0:00:00.707) 0:00:35.810 ********** 2026-03-29 04:48:23.752288 | orchestrator | 2026-03-29 04:48:23.752296 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-29 04:48:23.752303 | orchestrator | Sunday 29 March 2026 04:47:57 +0000 (0:00:00.488) 0:00:36.299 ********** 2026-03-29 04:48:23.752311 | orchestrator | 2026-03-29 04:48:23.752318 | 
orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 04:48:23.752326 | orchestrator | Sunday 29 March 2026 04:47:58 +0000 (0:00:00.503) 0:00:36.802 **********
2026-03-29 04:48:23.752430 | orchestrator |
2026-03-29 04:48:23.752438 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-29 04:48:23.752445 | orchestrator | Sunday 29 March 2026 04:47:59 +0000 (0:00:00.858) 0:00:37.661 **********
2026-03-29 04:48:23.752452 | orchestrator | changed: [testbed-node-4]
2026-03-29 04:48:23.752460 | orchestrator | changed: [testbed-node-3]
2026-03-29 04:48:23.752468 | orchestrator | changed: [testbed-node-5]
2026-03-29 04:48:23.752477 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:48:23.752484 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:48:23.752492 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:48:23.752500 | orchestrator |
2026-03-29 04:48:23.752507 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-29 04:48:23.752516 | orchestrator | Sunday 29 March 2026 04:48:10 +0000 (0:00:11.542) 0:00:49.204 **********
2026-03-29 04:48:23.752524 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:48:23.752533 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:48:23.752541 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:48:23.752548 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:48:23.752556 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:48:23.752564 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:48:23.752571 | orchestrator |
2026-03-29 04:48:23.752579 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-29 04:48:23.752587 | orchestrator | Sunday 29 March 2026 04:48:12 +0000 (0:00:02.225) 0:00:51.429 **********
2026-03-29 04:48:23.752594 | orchestrator | changed: [testbed-node-4]
2026-03-29 04:48:23.752602 | orchestrator | changed: [testbed-node-3]
2026-03-29 04:48:23.752610 | orchestrator | changed: [testbed-node-5]
2026-03-29 04:48:23.752619 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:48:23.752627 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:48:23.752634 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:48:23.752642 | orchestrator |
2026-03-29 04:48:23.752650 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-29 04:48:23.752664 | orchestrator | Sunday 29 March 2026 04:48:23 +0000 (0:00:10.939) 0:01:02.369 **********
2026-03-29 04:48:39.737526 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-29 04:48:39.737646 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-29 04:48:39.737674 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-29 04:48:39.737691 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-29 04:48:39.737700 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-29 04:48:39.737710 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-29 04:48:39.737720 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-29 04:48:39.737730 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-29 04:48:39.737762 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-29 04:48:39.737771 | orchestrator | ok: [testbed-node-5] => (item={'col':
'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-29 04:48:39.737781 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-29 04:48:39.737790 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-29 04:48:39.737799 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 04:48:39.737809 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 04:48:39.737818 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 04:48:39.737840 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 04:48:39.737850 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 04:48:39.737859 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 04:48:39.737867 | orchestrator |
2026-03-29 04:48:39.737877 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-29 04:48:39.737887 | orchestrator | Sunday 29 March 2026 04:48:31 +0000 (0:00:07.924) 0:01:10.294 **********
2026-03-29 04:48:39.737897 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-29 04:48:39.737907 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:48:39.737916 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-29 04:48:39.737923 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:48:39.737931 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-29 04:48:39.737940 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:48:39.737950 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-03-29 04:48:39.737960 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-03-29 04:48:39.737969 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-03-29 04:48:39.737979 | orchestrator |
2026-03-29 04:48:39.737989 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-29 04:48:39.737998 | orchestrator | Sunday 29 March 2026 04:48:34 +0000 (0:00:03.292) 0:01:13.586 **********
2026-03-29 04:48:39.738008 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-29 04:48:39.738065 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:48:39.738074 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-29 04:48:39.738083 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:48:39.738093 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-29 04:48:39.738103 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:48:39.738112 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-29 04:48:39.738122 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-29 04:48:39.738131 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-29 04:48:39.738141 | orchestrator |
2026-03-29 04:48:39.738151 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 04:48:39.738161 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 04:48:39.738171 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 04:48:39.738180 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 04:48:39.738196 | orchestrator |
orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 04:48:39.738224 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 04:48:39.738233 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 04:48:39.738242 | orchestrator |
2026-03-29 04:48:39.738251 | orchestrator |
2026-03-29 04:48:39.738260 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 04:48:39.738268 | orchestrator | Sunday 29 March 2026 04:48:39 +0000 (0:00:04.377) 0:01:17.963 **********
2026-03-29 04:48:39.738277 | orchestrator | ===============================================================================
2026-03-29 04:48:39.738286 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.54s
2026-03-29 04:48:39.738295 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.94s
2026-03-29 04:48:39.738303 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.92s
2026-03-29 04:48:39.738312 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.38s
2026-03-29 04:48:39.738367 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.62s
2026-03-29 04:48:39.738377 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.59s
2026-03-29 04:48:39.738386 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.34s
2026-03-29 04:48:39.738394 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.29s
2026-03-29 04:48:39.738404 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.67s
2026-03-29 04:48:39.738410 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.61s
2026-03-29 04:48:39.738416 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.61s
2026-03-29 04:48:39.738422 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.57s
2026-03-29 04:48:39.738428 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.47s
2026-03-29 04:48:39.738434 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.43s
2026-03-29 04:48:39.738440 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.22s
2026-03-29 04:48:39.738451 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.19s
2026-03-29 04:48:39.738457 | orchestrator | module-load : Load modules ---------------------------------------------- 2.13s
2026-03-29 04:48:39.738462 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.99s
2026-03-29 04:48:39.738467 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.95s
2026-03-29 04:48:39.738472 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.88s
2026-03-29 04:48:40.011287 | orchestrator | + osism apply -a upgrade ovn
2026-03-29 04:48:42.022655 | orchestrator | 2026-03-29 04:48:42 | INFO  | Task 33ededb5-45d2-4d09-a34a-af6bb9c9c70c (ovn) was prepared for execution.
2026-03-29 04:48:42.022753 | orchestrator | 2026-03-29 04:48:42 | INFO  | It takes a moment until task 33ededb5-45d2-4d09-a34a-af6bb9c9c70c (ovn) has been started and output is visible here.
2026-03-29 04:49:03.003702 | orchestrator | 2026-03-29 04:49:03.003798 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 04:49:03.003809 | orchestrator | 2026-03-29 04:49:03.003817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 04:49:03.003824 | orchestrator | Sunday 29 March 2026 04:48:47 +0000 (0:00:01.241) 0:00:01.241 ********** 2026-03-29 04:49:03.003850 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:49:03.003858 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:49:03.003865 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:49:03.003872 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:49:03.003878 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:49:03.003885 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:49:03.003892 | orchestrator | 2026-03-29 04:49:03.003899 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 04:49:03.003906 | orchestrator | Sunday 29 March 2026 04:48:50 +0000 (0:00:03.352) 0:00:04.593 ********** 2026-03-29 04:49:03.003913 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-29 04:49:03.003920 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-29 04:49:03.003927 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-29 04:49:03.003934 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-29 04:49:03.003940 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-29 04:49:03.003947 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-29 04:49:03.003954 | orchestrator | 2026-03-29 04:49:03.003960 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-29 04:49:03.003967 | orchestrator | 2026-03-29 04:49:03.003974 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-03-29 04:49:03.003981 | orchestrator | Sunday 29 March 2026 04:48:53 +0000 (0:00:02.249) 0:00:06.843 ********** 2026-03-29 04:49:03.003988 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 04:49:03.003996 | orchestrator | 2026-03-29 04:49:03.004003 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-29 04:49:03.004009 | orchestrator | Sunday 29 March 2026 04:48:56 +0000 (0:00:03.074) 0:00:09.918 ********** 2026-03-29 04:49:03.004018 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004027 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004034 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004041 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004059 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004089 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004097 | orchestrator | 2026-03-29 04:49:03.004104 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-29 04:49:03.004111 | orchestrator | Sunday 29 March 2026 04:48:58 +0000 (0:00:02.337) 0:00:12.255 ********** 2026-03-29 04:49:03.004118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004125 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004132 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004139 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004145 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004153 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004159 | orchestrator | 2026-03-29 04:49:03.004166 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-29 04:49:03.004173 | orchestrator | Sunday 29 March 2026 04:49:00 +0000 (0:00:02.446) 0:00:14.702 ********** 2026-03-29 04:49:03.004189 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004196 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:03.004207 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619628 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619702 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619708 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619713 | orchestrator | 2026-03-29 04:49:10.619718 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
************************** 2026-03-29 04:49:10.619723 | orchestrator | Sunday 29 March 2026 04:49:02 +0000 (0:00:02.123) 0:00:16.826 ********** 2026-03-29 04:49:10.619727 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619731 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619735 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619765 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619770 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619785 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619792 | orchestrator | 2026-03-29 04:49:10.619798 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-03-29 04:49:10.619804 | orchestrator | Sunday 29 March 2026 04:49:06 +0000 (0:00:03.102) 0:00:19.929 ********** 2026-03-29 04:49:10.619812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 04:49:10.619852 | orchestrator | 2026-03-29 04:49:10.619856 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-03-29 04:49:10.619863 | orchestrator | Sunday 29 March 2026 04:49:08 +0000 (0:00:02.549) 0:00:22.478 ********** 2026-03-29 04:49:10.619867 | orchestrator | changed: [testbed-node-0] => { 2026-03-29 04:49:10.619872 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:49:10.619876 | orchestrator | } 2026-03-29 04:49:10.619880 | orchestrator | changed: [testbed-node-1] => { 2026-03-29 04:49:10.619884 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:49:10.619888 | orchestrator | } 2026-03-29 04:49:10.619892 | orchestrator | changed: [testbed-node-2] => { 2026-03-29 04:49:10.619895 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:49:10.619899 | orchestrator | } 2026-03-29 04:49:10.619903 | orchestrator | changed: [testbed-node-3] => { 2026-03-29 04:49:10.619907 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:49:10.619910 | orchestrator | } 2026-03-29 04:49:10.619914 | orchestrator | changed: [testbed-node-4] => { 2026-03-29 04:49:10.619918 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:49:10.619922 | orchestrator | } 2026-03-29 04:49:10.619925 | orchestrator | changed: [testbed-node-5] => { 2026-03-29 04:49:10.619929 | orchestrator |  "msg": "Notifying handlers" 2026-03-29 04:49:10.619933 | orchestrator | } 2026-03-29 04:49:10.619937 | orchestrator | 2026-03-29 04:49:10.619940 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-29 04:49:10.619944 | orchestrator | Sunday 29 March 2026 04:49:10 +0000 
(0:00:01.866) 0:00:24.345 ********** 2026-03-29 04:49:10.619954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:49:41.738888 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:49:41.739022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:49:41.739043 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:49:41.739056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:49:41.739095 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:49:41.739107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:49:41.739118 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:49:41.739130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:49:41.739141 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:49:41.739152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 04:49:41.739163 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:49:41.739175 | orchestrator | 2026-03-29 04:49:41.739187 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-29 04:49:41.739198 | orchestrator | Sunday 29 March 2026 04:49:12 +0000 (0:00:02.378) 0:00:26.724 ********** 2026-03-29 04:49:41.739209 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:49:41.739221 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:49:41.739232 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:49:41.739242 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:49:41.739253 | orchestrator | ok: [testbed-node-5] 
2026-03-29 04:49:41.739278 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:49:41.739325 | orchestrator | 2026-03-29 04:49:41.739339 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-29 04:49:41.739350 | orchestrator | Sunday 29 March 2026 04:49:17 +0000 (0:00:04.399) 0:00:31.123 ********** 2026-03-29 04:49:41.739361 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-29 04:49:41.739373 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-29 04:49:41.739383 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-29 04:49:41.739395 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-29 04:49:41.739408 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-29 04:49:41.739422 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-29 04:49:41.739434 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 04:49:41.739447 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 04:49:41.739460 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 04:49:41.739473 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 04:49:41.739485 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 04:49:41.739515 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 04:49:41.739529 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-29 04:49:41.739555 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-29 04:49:41.739569 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-29 04:49:41.739582 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-29 04:49:41.739594 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-29 04:49:41.739607 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-29 04:49:41.739620 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 04:49:41.739632 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 04:49:41.739645 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 04:49:41.739657 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 04:49:41.739669 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 04:49:41.739681 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 04:49:41.739694 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 04:49:41.739706 | orchestrator | ok: [testbed-node-0] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 04:49:41.739718 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 04:49:41.739731 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 04:49:41.739743 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 04:49:41.739755 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 04:49:41.739769 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 04:49:41.739781 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 04:49:41.739793 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 04:49:41.739804 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 04:49:41.739815 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 04:49:41.739825 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 04:49:41.739836 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 04:49:41.739853 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 04:49:41.739864 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 04:49:41.739875 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 04:49:41.739886 | orchestrator | ok: [testbed-node-1] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 04:49:41.739897 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 04:49:41.739914 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-29 04:49:41.739934 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-29 04:49:41.739945 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-29 04:49:41.739956 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-29 04:49:41.739966 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-29 04:49:41.739984 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-29 04:52:29.150580 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 04:52:29.150700 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 04:52:29.150716 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 04:52:29.150730 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 04:52:29.150742 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 04:52:29.150753 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 04:52:29.150764 | orchestrator | 2026-03-29 04:52:29.150776 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 04:52:29.150788 | orchestrator | Sunday 29 March 2026 04:49:38 +0000 (0:00:21.444) 0:00:52.567 ********** 2026-03-29 04:52:29.150799 | orchestrator | 2026-03-29 04:52:29.150810 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 04:52:29.150821 | orchestrator | Sunday 29 March 2026 04:49:39 +0000 (0:00:00.431) 0:00:52.999 ********** 2026-03-29 04:52:29.150832 | orchestrator | 2026-03-29 04:52:29.150844 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 04:52:29.150855 | orchestrator | Sunday 29 March 2026 04:49:39 +0000 (0:00:00.428) 0:00:53.428 ********** 2026-03-29 04:52:29.150866 | orchestrator | 2026-03-29 04:52:29.150876 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 04:52:29.150887 | orchestrator | Sunday 29 March 2026 04:49:40 +0000 (0:00:00.449) 0:00:53.877 ********** 2026-03-29 04:52:29.150898 | orchestrator | 2026-03-29 04:52:29.150909 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 04:52:29.150920 | orchestrator | Sunday 29 March 2026 04:49:40 +0000 (0:00:00.428) 0:00:54.306 ********** 2026-03-29 04:52:29.150931 | orchestrator | 2026-03-29 04:52:29.150942 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 04:52:29.150953 | orchestrator | Sunday 29 March 2026 04:49:40 +0000 (0:00:00.435) 0:00:54.741 ********** 2026-03-29 04:52:29.150963 | orchestrator | 2026-03-29 04:52:29.150974 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-29 04:52:29.150985 | orchestrator | Sunday 29 March 2026 04:49:41 +0000 (0:00:00.796) 0:00:55.538 ********** 2026-03-29 04:52:29.150996 | orchestrator | 2026-03-29 04:52:29.151007 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-03-29 04:52:29.151019 | orchestrator | changed: [testbed-node-4] 2026-03-29 04:52:29.151031 | orchestrator | changed: [testbed-node-3] 2026-03-29 04:52:29.151042 | orchestrator | changed: [testbed-node-5] 2026-03-29 04:52:29.151053 | orchestrator | changed: [testbed-node-0] 2026-03-29 04:52:29.151087 | orchestrator | changed: [testbed-node-2] 2026-03-29 04:52:29.151099 | orchestrator | changed: [testbed-node-1] 2026-03-29 04:52:29.151110 | orchestrator | 2026-03-29 04:52:29.151123 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-29 04:52:29.151135 | orchestrator | 2026-03-29 04:52:29.151148 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 04:52:29.151180 | orchestrator | Sunday 29 March 2026 04:51:53 +0000 (0:02:11.926) 0:03:07.464 ********** 2026-03-29 04:52:29.151199 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:52:29.151248 | orchestrator | 2026-03-29 04:52:29.151286 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 04:52:29.151304 | orchestrator | Sunday 29 March 2026 04:51:55 +0000 (0:00:01.656) 0:03:09.120 ********** 2026-03-29 04:52:29.151341 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 04:52:29.151390 | orchestrator | 2026-03-29 04:52:29.151410 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] 
************* 2026-03-29 04:52:29.151430 | orchestrator | Sunday 29 March 2026 04:51:57 +0000 (0:00:01.867) 0:03:10.987 ********** 2026-03-29 04:52:29.151452 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.151473 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.151489 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.151506 | orchestrator | 2026-03-29 04:52:29.151523 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-29 04:52:29.151539 | orchestrator | Sunday 29 March 2026 04:51:59 +0000 (0:00:01.901) 0:03:12.888 ********** 2026-03-29 04:52:29.151556 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.151574 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.151591 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.151609 | orchestrator | 2026-03-29 04:52:29.151627 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-29 04:52:29.151645 | orchestrator | Sunday 29 March 2026 04:52:00 +0000 (0:00:01.404) 0:03:14.293 ********** 2026-03-29 04:52:29.151663 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.151681 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.151700 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.151719 | orchestrator | 2026-03-29 04:52:29.151738 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-29 04:52:29.151757 | orchestrator | Sunday 29 March 2026 04:52:01 +0000 (0:00:01.314) 0:03:15.608 ********** 2026-03-29 04:52:29.151776 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.151795 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.151814 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.151834 | orchestrator | 2026-03-29 04:52:29.151854 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-29 04:52:29.151875 | orchestrator | Sunday 29 March 
2026 04:52:03 +0000 (0:00:01.584) 0:03:17.193 ********** 2026-03-29 04:52:29.151894 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.151937 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.151958 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.151975 | orchestrator | 2026-03-29 04:52:29.151993 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-29 04:52:29.152010 | orchestrator | Sunday 29 March 2026 04:52:04 +0000 (0:00:01.339) 0:03:18.532 ********** 2026-03-29 04:52:29.152028 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:52:29.152046 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:52:29.152065 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:52:29.152082 | orchestrator | 2026-03-29 04:52:29.152101 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-29 04:52:29.152119 | orchestrator | Sunday 29 March 2026 04:52:06 +0000 (0:00:01.381) 0:03:19.913 ********** 2026-03-29 04:52:29.152136 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.152152 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.152163 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.152190 | orchestrator | 2026-03-29 04:52:29.152202 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-29 04:52:29.152237 | orchestrator | Sunday 29 March 2026 04:52:08 +0000 (0:00:02.012) 0:03:21.926 ********** 2026-03-29 04:52:29.152250 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.152261 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.152272 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.152282 | orchestrator | 2026-03-29 04:52:29.152293 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-29 04:52:29.152304 | orchestrator | Sunday 29 March 2026 04:52:09 +0000 (0:00:01.362) 0:03:23.288 ********** 
2026-03-29 04:52:29.152315 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.152325 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.152336 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.152347 | orchestrator | 2026-03-29 04:52:29.152358 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-29 04:52:29.152368 | orchestrator | Sunday 29 March 2026 04:52:11 +0000 (0:00:01.826) 0:03:25.115 ********** 2026-03-29 04:52:29.152379 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.152390 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.152400 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.152411 | orchestrator | 2026-03-29 04:52:29.152422 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-29 04:52:29.152433 | orchestrator | Sunday 29 March 2026 04:52:12 +0000 (0:00:01.314) 0:03:26.429 ********** 2026-03-29 04:52:29.152443 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:52:29.152454 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:52:29.152465 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:52:29.152475 | orchestrator | 2026-03-29 04:52:29.152486 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-29 04:52:29.152497 | orchestrator | Sunday 29 March 2026 04:52:13 +0000 (0:00:01.356) 0:03:27.785 ********** 2026-03-29 04:52:29.152508 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:52:29.152519 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:52:29.152530 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:52:29.152540 | orchestrator | 2026-03-29 04:52:29.152551 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-29 04:52:29.152561 | orchestrator | Sunday 29 March 2026 04:52:15 +0000 (0:00:01.546) 0:03:29.332 ********** 2026-03-29 04:52:29.152572 | 
orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.152583 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.152593 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.152604 | orchestrator | 2026-03-29 04:52:29.152615 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-29 04:52:29.152625 | orchestrator | Sunday 29 March 2026 04:52:17 +0000 (0:00:01.744) 0:03:31.077 ********** 2026-03-29 04:52:29.152636 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.152647 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.152657 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.152668 | orchestrator | 2026-03-29 04:52:29.152679 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-29 04:52:29.152690 | orchestrator | Sunday 29 March 2026 04:52:18 +0000 (0:00:01.382) 0:03:32.459 ********** 2026-03-29 04:52:29.152700 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.152711 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.152721 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.152732 | orchestrator | 2026-03-29 04:52:29.152753 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-29 04:52:29.152764 | orchestrator | Sunday 29 March 2026 04:52:20 +0000 (0:00:02.057) 0:03:34.516 ********** 2026-03-29 04:52:29.152775 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:52:29.152785 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:52:29.152796 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:52:29.152806 | orchestrator | 2026-03-29 04:52:29.152817 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-29 04:52:29.152835 | orchestrator | Sunday 29 March 2026 04:52:22 +0000 (0:00:01.406) 0:03:35.923 ********** 2026-03-29 04:52:29.152846 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:52:29.152857 | 
orchestrator | skipping: [testbed-node-1]
2026-03-29 04:52:29.152868 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:52:29.152878 | orchestrator |
2026-03-29 04:52:29.152889 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-29 04:52:29.152902 | orchestrator | Sunday 29 March 2026 04:52:23 +0000 (0:00:01.302) 0:03:37.226 **********
2026-03-29 04:52:29.152920 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:52:29.152938 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:52:29.152957 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:52:29.152975 | orchestrator |
2026-03-29 04:52:29.152993 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-29 04:52:29.153011 | orchestrator | Sunday 29 March 2026 04:52:25 +0000 (0:00:01.674) 0:03:38.901 **********
2026-03-29 04:52:29.153037 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563730 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563841 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563857 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563869 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563895 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563925 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563965 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563986 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.563996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.564007 | orchestrator |
2026-03-29 04:52:35.564018 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-29 04:52:35.564029 | orchestrator | Sunday 29 March 2026 04:52:29 +0000 (0:00:04.075) 0:03:42.976 **********
2026-03-29 04:52:35.564040 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.564064 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.564075 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.564086 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:35.564103 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089319 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089457 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089530 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089585 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089607 | orchestrator |
2026-03-29 04:52:50.089619 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-03-29 04:52:50.089631 | orchestrator | Sunday 29 March 2026 04:52:35 +0000 (0:00:06.407) 0:03:49.384 **********
2026-03-29 04:52:50.089641 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-03-29 04:52:50.089651 | orchestrator |
2026-03-29 04:52:50.089661 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-03-29 04:52:50.089671 | orchestrator | Sunday 29 March 2026 04:52:37 +0000 (0:00:01.908) 0:03:51.293 **********
2026-03-29 04:52:50.089681 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:52:50.089692 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:52:50.089717 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:52:50.089728 | orchestrator |
2026-03-29 04:52:50.089738 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-03-29 04:52:50.089747 | orchestrator | Sunday 29 March 2026 04:52:39 +0000 (0:00:01.764) 0:03:53.057 **********
2026-03-29 04:52:50.089757 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:52:50.089769 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:52:50.089780 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:52:50.089793 | orchestrator |
2026-03-29 04:52:50.089815 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-03-29 04:52:50.089839 | orchestrator | Sunday 29 March 2026 04:52:41 +0000 (0:00:02.537) 0:03:55.595 **********
2026-03-29 04:52:50.089855 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:52:50.089870 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:52:50.089886 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:52:50.089902 | orchestrator |
2026-03-29 04:52:50.089932 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-03-29 04:52:50.089948 | orchestrator | Sunday 29 March 2026 04:52:44 +0000 (0:00:02.805) 0:03:58.400 **********
2026-03-29 04:52:50.089965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.089985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.090010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.090106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.090127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.090154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:50.090178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.573825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.573914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.573924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.573945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.573952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.573960 | orchestrator |
2026-03-29 04:52:54.573968 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-03-29 04:52:54.573976 | orchestrator | Sunday 29 March 2026 04:52:50 +0000 (0:00:05.504) 0:04:03.904 **********
2026-03-29 04:52:54.573984 | orchestrator | changed: [testbed-node-0] => {
2026-03-29 04:52:54.573992 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:52:54.573999 | orchestrator | }
2026-03-29 04:52:54.574005 | orchestrator | changed: [testbed-node-1] => {
2026-03-29 04:52:54.574058 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:52:54.574067 | orchestrator | }
2026-03-29 04:52:54.574073 | orchestrator | changed: [testbed-node-2] => {
2026-03-29 04:52:54.574080 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:52:54.574087 | orchestrator | }
2026-03-29 04:52:54.574094 | orchestrator |
2026-03-29 04:52:54.574101 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-29 04:52:54.574108 | orchestrator | Sunday 29 March 2026 04:52:51 +0000 (0:00:01.375) 0:04:05.280 **********
2026-03-29 04:52:54.574115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:52:54.574355 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 04:54:24.240706 | orchestrator |
2026-03-29 04:54:24.240835 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-03-29 04:54:24.240847 | orchestrator | Sunday 29 March 2026 04:52:54 +0000 (0:00:03.116) 0:04:08.397 **********
2026-03-29 04:54:24.240856 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-03-29 04:54:24.240865 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-03-29 04:54:24.240872 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-03-29 04:54:24.240879 | orchestrator |
2026-03-29 04:54:24.240887 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-03-29 04:54:24.240895 | orchestrator | Sunday 29 March 2026 04:52:56 +0000 (0:00:02.235) 0:04:10.633 **********
2026-03-29 04:54:24.240902 | orchestrator | changed: [testbed-node-0] => {
2026-03-29 04:54:24.240910 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:54:24.240918 | orchestrator | }
2026-03-29 04:54:24.240925 | orchestrator | changed: [testbed-node-1] => {
2026-03-29 04:54:24.240932 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:54:24.240939 | orchestrator | }
2026-03-29 04:54:24.240946 | orchestrator | changed: [testbed-node-2] => {
2026-03-29 04:54:24.240952 | orchestrator |  "msg": "Notifying handlers"
2026-03-29 04:54:24.240959 | orchestrator | }
2026-03-29 04:54:24.240966 | orchestrator |
2026-03-29 04:54:24.240973 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 04:54:24.240980 | orchestrator | Sunday 29 March 2026 04:52:58 +0000 (0:00:01.387) 0:04:12.021 **********
2026-03-29 04:54:24.240987 | orchestrator |
2026-03-29 04:54:24.240994 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 04:54:24.241000 | orchestrator | Sunday 29 March 2026 04:52:58 +0000 (0:00:00.434) 0:04:12.455 **********
2026-03-29 04:54:24.241007 | orchestrator |
2026-03-29 04:54:24.241030 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 04:54:24.241037 | orchestrator | Sunday 29 March 2026 04:52:59 +0000 (0:00:00.447) 0:04:12.902 **********
2026-03-29 04:54:24.241044 | orchestrator |
2026-03-29 04:54:24.241050 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-29 04:54:24.241057 | orchestrator | Sunday 29 March 2026 04:53:00 +0000 (0:00:00.981) 0:04:13.884 **********
2026-03-29 04:54:24.241064 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:54:24.241071 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:54:24.241077 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:54:24.241084 | orchestrator |
2026-03-29 04:54:24.241091 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-29 04:54:24.241122 | orchestrator | Sunday 29 March 2026 04:53:15 +0000 (0:00:15.784) 0:04:29.668 **********
2026-03-29 04:54:24.241130 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:54:24.241136 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:54:24.241143 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:54:24.241149 | orchestrator |
2026-03-29 04:54:24.241156 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-03-29 04:54:24.241163 | orchestrator | Sunday 29 March 2026 04:53:31 +0000 (0:00:15.900) 0:04:45.569 **********
2026-03-29 04:54:24.241170 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-03-29 04:54:24.241176 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-03-29 04:54:24.241183 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-03-29 04:54:24.241217 | orchestrator |
2026-03-29 04:54:24.241225 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-29 04:54:24.241233 | orchestrator | Sunday 29 March 2026 04:53:47 +0000 (0:00:15.425) 0:05:00.994 **********
2026-03-29 04:54:24.241240 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:54:24.241247 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:54:24.241255 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:54:24.241263 | orchestrator |
2026-03-29 04:54:24.241271 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-29 04:54:24.241279 | orchestrator | Sunday 29 March 2026 04:54:03 +0000 (0:00:16.714) 0:05:17.709 **********
2026-03-29 04:54:24.241286 | orchestrator | Pausing for 5 seconds
2026-03-29 04:54:24.241293 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:54:24.241300 | orchestrator |
2026-03-29 04:54:24.241307 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-29 04:54:24.241313 | orchestrator | Sunday 29 March 2026 04:54:10 +0000 (0:00:06.195) 0:05:23.904 **********
2026-03-29 04:54:24.241320 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:54:24.241326 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:54:24.241333 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:54:24.241340 | orchestrator |
2026-03-29 04:54:24.241346 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-29 04:54:24.241353 | orchestrator | Sunday 29 March 2026 04:54:11 +0000 (0:00:01.818) 0:05:25.723 **********
2026-03-29 04:54:24.241359 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:54:24.241366 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:54:24.241373 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:54:24.241379 | orchestrator |
2026-03-29 04:54:24.241386 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-29 04:54:24.241392 | orchestrator | Sunday 29 March 2026 04:54:13 +0000 (0:00:01.794) 0:05:27.518 **********
2026-03-29 04:54:24.241399 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:54:24.241406 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:54:24.241412 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:54:24.241419 | orchestrator |
2026-03-29 04:54:24.241425 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-29 04:54:24.241432 | orchestrator | Sunday 29 March 2026 04:54:15 +0000 (0:00:01.876) 0:05:29.394 **********
2026-03-29 04:54:24.241438 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:54:24.241445 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:54:24.241452 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:54:24.241458 | orchestrator |
2026-03-29 04:54:24.241465 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-29 04:54:24.241471 | orchestrator | Sunday 29 March 2026 04:54:17 +0000 (0:00:01.876) 0:05:31.271 **********
2026-03-29 04:54:24.241478 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:54:24.241485 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:54:24.241495 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:54:24.241505 | orchestrator |
2026-03-29 04:54:24.241517 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-29 04:54:24.241549 | orchestrator | Sunday 29 March 2026 04:54:19 +0000 (0:00:01.820) 0:05:33.092 **********
2026-03-29 04:54:24.241571 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:54:24.241583 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:54:24.241594 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:54:24.241606 | orchestrator |
2026-03-29 04:54:24.241617 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-29 04:54:24.241627 | orchestrator | Sunday 29 March 2026 04:54:21 +0000 (0:00:01.811) 0:05:34.903 **********
2026-03-29 04:54:24.241634 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-29 04:54:24.241641 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-29 04:54:24.241647 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-29 04:54:24.241654 | orchestrator |
2026-03-29 04:54:24.241660 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 04:54:24.241669 | orchestrator | testbed-node-0 : ok=48  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-29 04:54:24.241678 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 04:54:24.241685 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 04:54:24.241691 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-29 04:54:24.241704 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-29 04:54:24.241711 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-29 04:54:24.241717 | orchestrator |
2026-03-29 04:54:24.241724 | orchestrator |
2026-03-29 04:54:24.241731 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 04:54:24.241737 | orchestrator | Sunday 29 March 2026 04:54:23 +0000 (0:00:02.819) 0:05:37.723 **********
2026-03-29 04:54:24.241744 | orchestrator | ===============================================================================
2026-03-29 04:54:24.241750 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.93s
2026-03-29 04:54:24.241757 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.44s
2026-03-29 04:54:24.241764 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.72s
2026-03-29 04:54:24.241770 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.90s
2026-03-29 04:54:24.241777 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.78s
2026-03-29 04:54:24.241783 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 15.43s
2026-03-29 04:54:24.241790 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.41s
2026-03-29 04:54:24.241796 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.20s
2026-03-29 04:54:24.241803 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.50s
2026-03-29 04:54:24.241809 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 4.40s
2026-03-29 04:54:24.241816 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.08s
2026-03-29 04:54:24.241822 | orchestrator | Group hosts
based on Kolla action --------------------------------------- 3.35s 2026-03-29 04:54:24.241831 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.12s 2026-03-29 04:54:24.241842 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.10s 2026-03-29 04:54:24.241853 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.07s 2026-03-29 04:54:24.241864 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 2.97s 2026-03-29 04:54:24.241882 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.82s 2026-03-29 04:54:24.241892 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.81s 2026-03-29 04:54:24.241902 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.55s 2026-03-29 04:54:24.241914 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.54s 2026-03-29 04:54:24.515718 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-29 04:54:24.515868 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 04:54:24.515886 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-03-29 04:54:24.524062 | orchestrator | + set -e 2026-03-29 04:54:24.524125 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 04:54:24.524138 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 04:54:24.524151 | orchestrator | ++ INTERACTIVE=false 2026-03-29 04:54:24.524162 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 04:54:24.524173 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 04:54:24.524184 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-03-29 04:54:26.520341 | orchestrator | 2026-03-29 04:54:26 | INFO  | Task 74b56e06-6bd0-4b06-b5e0-f33327107385 (ceph-rolling_update) was 
prepared for execution. 2026-03-29 04:54:26.520502 | orchestrator | 2026-03-29 04:54:26 | INFO  | It takes a moment until task 74b56e06-6bd0-4b06-b5e0-f33327107385 (ceph-rolling_update) has been started and output is visible here. 2026-03-29 04:55:48.692926 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-29 04:55:48.693065 | orchestrator | 2.16.14 2026-03-29 04:55:48.693094 | orchestrator | 2026-03-29 04:55:48.693115 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-03-29 04:55:48.693136 | orchestrator | 2026-03-29 04:55:48.693156 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-03-29 04:55:48.693206 | orchestrator | Sunday 29 March 2026 04:54:34 +0000 (0:00:02.024) 0:00:02.024 ********** 2026-03-29 04:55:48.693227 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-03-29 04:55:48.693248 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-03-29 04:55:48.693268 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-03-29 04:55:48.693288 | orchestrator | skipping: [localhost] 2026-03-29 04:55:48.693306 | orchestrator | 2026-03-29 04:55:48.693326 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-03-29 04:55:48.693345 | orchestrator | 2026-03-29 04:55:48.693364 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-03-29 04:55:48.693383 | orchestrator | Sunday 29 March 2026 04:54:37 +0000 (0:00:02.164) 0:00:04.189 ********** 2026-03-29 04:55:48.693401 | orchestrator | ok: [testbed-node-0] => { 2026-03-29 04:55:48.693420 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-29 04:55:48.693439 | orchestrator | } 2026-03-29 04:55:48.693460 | orchestrator | ok: 
[testbed-node-1] => { 2026-03-29 04:55:48.693481 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-29 04:55:48.693501 | orchestrator | } 2026-03-29 04:55:48.693521 | orchestrator | ok: [testbed-node-2] => { 2026-03-29 04:55:48.693541 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-29 04:55:48.693561 | orchestrator | } 2026-03-29 04:55:48.693579 | orchestrator | ok: [testbed-node-3] => { 2026-03-29 04:55:48.693598 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-29 04:55:48.693610 | orchestrator | } 2026-03-29 04:55:48.693622 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 04:55:48.693642 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-29 04:55:48.693661 | orchestrator | } 2026-03-29 04:55:48.693679 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 04:55:48.693697 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-29 04:55:48.693717 | orchestrator | } 2026-03-29 04:55:48.693768 | orchestrator | ok: [testbed-manager] => { 2026-03-29 04:55:48.693788 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-29 04:55:48.693807 | orchestrator | } 2026-03-29 04:55:48.693819 | orchestrator | 2026-03-29 04:55:48.693830 | orchestrator | TASK [Gather facts] ************************************************************ 2026-03-29 04:55:48.693841 | orchestrator | Sunday 29 March 2026 04:54:41 +0000 (0:00:04.560) 0:00:08.749 ********** 2026-03-29 04:55:48.693852 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:55:48.693863 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:55:48.693873 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:55:48.693884 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:55:48.693895 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:55:48.693906 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 04:55:48.693916 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.693927 | orchestrator | 2026-03-29 04:55:48.693938 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-03-29 04:55:48.693949 | orchestrator | Sunday 29 March 2026 04:54:47 +0000 (0:00:05.310) 0:00:14.060 ********** 2026-03-29 04:55:48.693960 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 04:55:48.693971 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 04:55:48.693982 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 04:55:48.693993 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-29 04:55:48.694003 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 04:55:48.694014 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 04:55:48.694082 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 04:55:48.694093 | orchestrator | 2026-03-29 04:55:48.694104 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-03-29 04:55:48.694115 | orchestrator | Sunday 29 March 2026 04:55:19 +0000 (0:00:32.239) 0:00:46.299 ********** 2026-03-29 04:55:48.694126 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.694137 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.694148 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.694158 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.694169 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:55:48.694209 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.694221 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.694232 | orchestrator | 2026-03-29 
04:55:48.694242 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-29 04:55:48.694253 | orchestrator | Sunday 29 March 2026 04:55:21 +0000 (0:00:02.230) 0:00:48.530 ********** 2026-03-29 04:55:48.694265 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-29 04:55:48.694278 | orchestrator | 2026-03-29 04:55:48.694289 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-29 04:55:48.694299 | orchestrator | Sunday 29 March 2026 04:55:23 +0000 (0:00:02.345) 0:00:50.875 ********** 2026-03-29 04:55:48.694310 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.694321 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.694332 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.694343 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.694353 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:55:48.694364 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.694375 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.694385 | orchestrator | 2026-03-29 04:55:48.694417 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-29 04:55:48.694429 | orchestrator | Sunday 29 March 2026 04:55:25 +0000 (0:00:02.159) 0:00:53.035 ********** 2026-03-29 04:55:48.694440 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.694461 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.694472 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.694483 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.694493 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:55:48.694503 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.694514 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.694525 | orchestrator | 2026-03-29 04:55:48.694536 | 
orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-29 04:55:48.694546 | orchestrator | Sunday 29 March 2026 04:55:27 +0000 (0:00:01.830) 0:00:54.866 ********** 2026-03-29 04:55:48.694557 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.694568 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.694578 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.694679 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.694701 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:55:48.694712 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.694723 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.694733 | orchestrator | 2026-03-29 04:55:48.694744 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-29 04:55:48.694755 | orchestrator | Sunday 29 March 2026 04:55:30 +0000 (0:00:02.243) 0:00:57.109 ********** 2026-03-29 04:55:48.694766 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.694777 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.694787 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.694799 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.694817 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:55:48.694844 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.694864 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.694882 | orchestrator | 2026-03-29 04:55:48.694900 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-29 04:55:48.694924 | orchestrator | Sunday 29 March 2026 04:55:31 +0000 (0:00:01.893) 0:00:59.003 ********** 2026-03-29 04:55:48.694942 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.694961 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.694979 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.694998 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.695017 | orchestrator 
| ok: [testbed-node-4] 2026-03-29 04:55:48.695035 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.695053 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.695064 | orchestrator | 2026-03-29 04:55:48.695074 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-29 04:55:48.695086 | orchestrator | Sunday 29 March 2026 04:55:33 +0000 (0:00:01.965) 0:01:00.969 ********** 2026-03-29 04:55:48.695096 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.695107 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.695118 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.695128 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.695139 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:55:48.695152 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.695249 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.695276 | orchestrator | 2026-03-29 04:55:48.695295 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-29 04:55:48.695314 | orchestrator | Sunday 29 March 2026 04:55:35 +0000 (0:00:01.986) 0:01:02.956 ********** 2026-03-29 04:55:48.695332 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:55:48.695350 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:55:48.695369 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:55:48.695388 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:55:48.695407 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:55:48.695424 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:55:48.695442 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:55:48.695460 | orchestrator | 2026-03-29 04:55:48.695478 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-29 04:55:48.695496 | orchestrator | Sunday 29 March 2026 04:55:38 +0000 (0:00:02.359) 0:01:05.315 ********** 2026-03-29 04:55:48.695514 | 
orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.695553 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.695573 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.695592 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.695609 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:55:48.695627 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.695642 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.695659 | orchestrator | 2026-03-29 04:55:48.695677 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-29 04:55:48.695696 | orchestrator | Sunday 29 March 2026 04:55:40 +0000 (0:00:02.040) 0:01:07.356 ********** 2026-03-29 04:55:48.695714 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 04:55:48.695732 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 04:55:48.695749 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 04:55:48.695767 | orchestrator | 2026-03-29 04:55:48.695786 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-29 04:55:48.695803 | orchestrator | Sunday 29 March 2026 04:55:41 +0000 (0:00:01.611) 0:01:08.967 ********** 2026-03-29 04:55:48.695821 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:55:48.695839 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:55:48.695858 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:55:48.695876 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:55:48.695893 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:55:48.695912 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:55:48.695930 | orchestrator | ok: [testbed-manager] 2026-03-29 04:55:48.695949 | orchestrator | 2026-03-29 04:55:48.695967 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-29 04:55:48.695985 | 
orchestrator | Sunday 29 March 2026 04:55:43 +0000 (0:00:02.048) 0:01:11.016 ********** 2026-03-29 04:55:48.696003 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 04:55:48.696023 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 04:55:48.696040 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 04:55:48.696058 | orchestrator | 2026-03-29 04:55:48.696073 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-29 04:55:48.696084 | orchestrator | Sunday 29 March 2026 04:55:47 +0000 (0:00:03.289) 0:01:14.306 ********** 2026-03-29 04:55:48.696116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 04:56:10.102083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 04:56:10.102271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 04:56:10.102291 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:56:10.102303 | orchestrator | 2026-03-29 04:56:10.102316 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-29 04:56:10.102328 | orchestrator | Sunday 29 March 2026 04:55:48 +0000 (0:00:01.419) 0:01:15.725 ********** 2026-03-29 04:56:10.102341 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 04:56:10.102355 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 04:56:10.102367 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 04:56:10.102378 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:56:10.102389 | orchestrator | 2026-03-29 04:56:10.102400 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-29 04:56:10.102448 | orchestrator | Sunday 29 March 2026 04:55:50 +0000 (0:00:01.832) 0:01:17.558 ********** 2026-03-29 04:56:10.102462 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:10.102477 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:10.102488 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:10.102499 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 04:56:10.102510 | orchestrator | 2026-03-29 04:56:10.102521 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-29 04:56:10.102532 | orchestrator | Sunday 29 March 2026 04:55:51 +0000 (0:00:01.156) 0:01:18.714 ********** 2026-03-29 04:56:10.102546 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '76a3923fe123', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 04:55:44.623364', 'end': '2026-03-29 04:55:44.670612', 'delta': '0:00:00.047248', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['76a3923fe123'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-29 04:56:10.102581 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a6db66d8015c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 04:55:45.458337', 'end': '2026-03-29 04:55:45.513082', 'delta': '0:00:00.054745', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6db66d8015c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-29 04:56:10.102596 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5a2b09aac491', 'stderr': 
'', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 04:55:46.024234', 'end': '2026-03-29 04:55:46.073018', 'delta': '0:00:00.048784', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5a2b09aac491'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-29 04:56:10.102618 | orchestrator | 2026-03-29 04:56:10.102631 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-29 04:56:10.102645 | orchestrator | Sunday 29 March 2026 04:55:52 +0000 (0:00:01.176) 0:01:19.891 ********** 2026-03-29 04:56:10.102658 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:56:10.102671 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:56:10.102684 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:56:10.102697 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:56:10.102714 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:56:10.102728 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:56:10.102741 | orchestrator | ok: [testbed-manager] 2026-03-29 04:56:10.102753 | orchestrator | 2026-03-29 04:56:10.102765 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-29 04:56:10.102779 | orchestrator | Sunday 29 March 2026 04:55:54 +0000 (0:00:02.031) 0:01:21.923 ********** 2026-03-29 04:56:10.102792 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:56:10.102805 | orchestrator | 2026-03-29 04:56:10.102817 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-29 04:56:10.102829 | orchestrator | Sunday 29 March 2026 
04:55:56 +0000 (0:00:01.266) 0:01:23.189 ********** 2026-03-29 04:56:10.102842 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:56:10.102854 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:56:10.102867 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:56:10.102879 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:56:10.102891 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:56:10.102902 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:56:10.102915 | orchestrator | ok: [testbed-manager] 2026-03-29 04:56:10.102927 | orchestrator | 2026-03-29 04:56:10.102940 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-29 04:56:10.103035 | orchestrator | Sunday 29 March 2026 04:55:58 +0000 (0:00:02.037) 0:01:25.227 ********** 2026-03-29 04:56:10.103048 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:56:10.103059 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-29 04:56:10.103070 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-29 04:56:10.103080 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 04:56:10.103091 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-29 04:56:10.103102 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-29 04:56:10.103112 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 04:56:10.103123 | orchestrator | 2026-03-29 04:56:10.103134 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 04:56:10.103144 | orchestrator | Sunday 29 March 2026 04:56:01 +0000 (0:00:03.236) 0:01:28.464 ********** 2026-03-29 04:56:10.103155 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:56:10.103166 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:56:10.103202 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:56:10.103215 | orchestrator | ok: [testbed-node-3] 
2026-03-29 04:56:10.103226 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:56:10.103236 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:56:10.103247 | orchestrator | ok: [testbed-manager]
2026-03-29 04:56:10.103258 | orchestrator |
2026-03-29 04:56:10.103269 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-29 04:56:10.103280 | orchestrator | Sunday 29 March 2026 04:56:03 +0000 (0:00:02.052) 0:01:30.517 **********
2026-03-29 04:56:10.103291 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:10.103302 | orchestrator |
2026-03-29 04:56:10.103313 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-29 04:56:10.103324 | orchestrator | Sunday 29 March 2026 04:56:04 +0000 (0:00:01.120) 0:01:31.637 **********
2026-03-29 04:56:10.103334 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:10.103345 | orchestrator |
2026-03-29 04:56:10.103356 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-29 04:56:10.103366 | orchestrator | Sunday 29 March 2026 04:56:05 +0000 (0:00:01.249) 0:01:32.887 **********
2026-03-29 04:56:10.103386 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:10.103397 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:10.103408 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:10.103418 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:10.103429 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:10.103440 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:10.103451 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:10.103461 | orchestrator |
2026-03-29 04:56:10.103472 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-29 04:56:10.103483 | orchestrator | Sunday 29 March 2026 04:56:08 +0000 (0:00:02.341) 0:01:35.228 **********
2026-03-29 04:56:10.103494 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:10.103504 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:10.103515 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:10.103526 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:10.103536 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:10.103547 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:10.103567 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:20.340080 | orchestrator |
2026-03-29 04:56:20.340164 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-29 04:56:20.340172 | orchestrator | Sunday 29 March 2026 04:56:10 +0000 (0:00:01.903) 0:01:37.132 **********
2026-03-29 04:56:20.340229 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:20.340234 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:20.340238 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:20.340242 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:20.340246 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:20.340250 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:20.340254 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:20.340258 | orchestrator |
2026-03-29 04:56:20.340263 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-29 04:56:20.340267 | orchestrator | Sunday 29 March 2026 04:56:12 +0000 (0:00:02.022) 0:01:39.155 **********
2026-03-29 04:56:20.340270 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:20.340274 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:20.340278 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:20.340282 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:20.340286 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:20.340289 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:20.340293 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:20.340297 | orchestrator |
2026-03-29 04:56:20.340300 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-29 04:56:20.340304 | orchestrator | Sunday 29 March 2026 04:56:13 +0000 (0:00:01.874) 0:01:41.029 **********
2026-03-29 04:56:20.340308 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:20.340312 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:20.340316 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:20.340320 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:20.340335 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:20.340339 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:20.340343 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:20.340346 | orchestrator |
2026-03-29 04:56:20.340350 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-29 04:56:20.340354 | orchestrator | Sunday 29 March 2026 04:56:16 +0000 (0:00:02.117) 0:01:43.147 **********
2026-03-29 04:56:20.340357 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:20.340361 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:20.340365 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:20.340368 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:20.340372 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:20.340376 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:20.340379 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:20.340396 | orchestrator |
2026-03-29 04:56:20.340400 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-29 04:56:20.340404 | orchestrator | Sunday 29 March 2026 04:56:17 +0000 (0:00:01.878) 0:01:45.026 **********
2026-03-29 04:56:20.340408 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:20.340412 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:20.340416 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:20.340419 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:20.340423 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:20.340427 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:20.340430 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:20.340434 | orchestrator |
2026-03-29 04:56:20.340437 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-29 04:56:20.340441 | orchestrator | Sunday 29 March 2026 04:56:20 +0000 (0:00:02.052) 0:01:47.078 **********
2026-03-29 04:56:20.340447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.340454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.340458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.340474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-29 04:56:20.340480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.340484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.340490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.340500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8615e525', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-29 04:56:20.340506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.340515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-29 04:56:20.475748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee30bf19', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-29 04:56:20.475835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475859 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:20.475872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.475906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-29 04:56:20.475926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.660407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.660551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.660572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b0adc3c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-29 04:56:20.660587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.660598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.660609 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:20.660638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.660672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f', 'dm-uuid-LVM-0kHDhDCPHLGd2Fg1VzOlgDOeDKeaHucwfak19l6KqwOwdAXhRxsleFnI4v0OuiOl'], 'uuids': ['da8fc11e-6dfb-4dbe-b694-e6f7cad69a1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3d42ed5a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl']}})
2026-03-29 04:56:20.660692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e', 'scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be2200f0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-29 04:56:20.660710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w79kNO-xrib-djNF-BC1b-oenW-947w-67KtbL', 'scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472', 'scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd786153b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c']}})
2026-03-29 04:56:20.660727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.660744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.660761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-29 04:56:20.660802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.953326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP', 'dm-uuid-CRYPT-LUKS2-d6bcf8282f5d4cd9b60620cb55b2c90a-kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-29 04:56:20.953447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.953467 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:20.953481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c', 'dm-uuid-LVM-WmwWNP6o5LQNgrcvTESUpu2sCljSf9EJkfdNL8HsipxQGyavpLq36XQFDCYO8YrP'], 'uuids': ['d6bcf828-2f5d-4cd9-b606-20cb55b2c90a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd786153b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP']}})
2026-03-29 04:56:20.953493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-W8BXAo-VIeS-lNkU-0xsH-1v6j-IWb5-xeSbRL', 'scsi-0QEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249', 'scsi-SQEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d42ed5a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f']}})
2026-03-29 04:56:20.953504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.953574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ccc377a4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-29 04:56:20.953612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.953626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.953637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl', 'dm-uuid-CRYPT-LUKS2-da8fc11e6dfb4dbeb694e6f7cad69a1a-fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-29 04:56:20.953652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 04:56:20.953666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948', 'dm-uuid-LVM-VVVRanGAMYCBBo3Ea1Is2tjcYgwKNf2qA0QNo4TmjeChe8gjBEKp176k85VNMXVp'], 'uuids': ['f13fc2e4-c586-4a34-95a4-f625771d43e0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10b9e860', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp']}})
2026-03-29 04:56:20.953686 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:20.953708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a', 'scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93baa594', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}})  2026-03-29 04:56:21.015636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXgjHK-x1j6-yafV-EcrV-Z8hS-LdwZ-h63E7O', 'scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0', 'scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2180dd6a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056']}})  2026-03-29 04:56:21.015723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.015734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.015743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-39-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 04:56:21.015750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.015757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW', 'dm-uuid-CRYPT-LUKS2-a49c734036574bbbb8952c2cd9942323-2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 04:56:21.015785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.015806 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056', 'dm-uuid-LVM-IXftd1VPXOpbncKd3f2ob1nYXsz4DemJ2XJQIMxaL0NRJ8j3ZeXDz0EJW4fLUFzW'], 'uuids': ['a49c7340-3657-4bbb-b895-2c2cd9942323'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2180dd6a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW']}})  2026-03-29 04:56:21.015818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-TjrJ6N-vXHW-nYMX-XIsI-w8Ql-NkWF-pB5l7A', 'scsi-0QEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62', 'scsi-SQEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10b9e860', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948']}})  2026-03-29 04:56:21.015825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.015835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '36bedc35', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-29 04:56:21.015848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.015859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.150114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp', 'dm-uuid-CRYPT-LUKS2-f13fc2e4c5864a3495a4f625771d43e0-A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 04:56:21.150262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.150282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33', 'dm-uuid-LVM-ZRBeHs6onLIpNjnfPONnwMoGWYFOYt3b0sOhEPSSzOPtCa3muL1oqHvJG7beZNDD'], 'uuids': ['5145feac-f6a0-43d9-bef0-ff6b872aac71'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '002a7ab0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD']}})  2026-03-29 04:56:21.150296 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:56:21.150309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b', 'scsi-SQEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ef57056d', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-29 04:56:21.150346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FIE3VR-hmEq-gbau-KgWX-Ie3n-RrWX-Y63w2o', 'scsi-0QEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735', 'scsi-SQEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee98996d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844']}})  2026-03-29 04:56:21.150359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.150371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': 
[]}})  2026-03-29 04:56:21.150408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 04:56:21.150422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.150433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe', 'dm-uuid-CRYPT-LUKS2-36d885a21b3e42128c82194bcbfb2fb2-dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 04:56:21.150445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.150463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844', 'dm-uuid-LVM-VJ9z4eyflUTf2lcw8J1Bh3VXDEKKGuPmdvxBFAfXTwWZGF4ojvc0MEIvaFMTSMoe'], 'uuids': ['36d885a2-1b3e-4212-8c82-194bcbfb2fb2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee98996d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe']}})  2026-03-29 04:56:21.150475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VkIrl1-06lK-dW9p-hM1X-TIpn-uX5t-oclg00', 'scsi-0QEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa', 'scsi-SQEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '002a7ab0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33']}})  2026-03-29 04:56:21.150487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:21.150518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '160e36ea', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-29 04:56:22.520650 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520757 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520776 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD', 'dm-uuid-CRYPT-LUKS2-5145feacf6a043d9bef0ff6b872aac71-0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520815 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-38-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 
'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 04:56:22.520826 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:56:22.520837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520883 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520894 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520911 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc'], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '641edd66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part16', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part14', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part15', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part1', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-29 04:56:22.520927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520942 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 04:56:22.520964 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:56:22.520982 | orchestrator | 2026-03-29 04:56:22.521007 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-29 04:56:22.521023 | orchestrator | Sunday 29 March 2026 04:56:22 +0000 (0:00:02.321) 0:01:49.400 ********** 2026-03-29 04:56:22.521051 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.679761 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.679880 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.679915 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.679929 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.679941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.679994 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.680050 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8615e525', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.680067 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.680078 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.680097 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:56:22.680111 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.680130 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822076 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822155 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822212 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822221 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822245 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822274 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee30bf19', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822283 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822290 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822303 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:56:22.822311 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822327 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.822340 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.946681 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.946810 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.946832 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.946873 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.946913 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b0adc3c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.946945 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.946970 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.946985 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:56:22.947001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.947017 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f', 'dm-uuid-LVM-0kHDhDCPHLGd2Fg1VzOlgDOeDKeaHucwfak19l6KqwOwdAXhRxsleFnI4v0OuiOl'], 'uuids': ['da8fc11e-6dfb-4dbe-b694-e6f7cad69a1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3d42ed5a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:22.947043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e', 'scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be2200f0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w79kNO-xrib-djNF-BC1b-oenW-947w-67KtbL', 'scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472', 'scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd786153b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133911 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP', 'dm-uuid-CRYPT-LUKS2-d6bcf8282f5d4cd9b60620cb55b2c90a-kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c', 'dm-uuid-LVM-WmwWNP6o5LQNgrcvTESUpu2sCljSf9EJkfdNL8HsipxQGyavpLq36XQFDCYO8YrP'], 'uuids': ['d6bcf828-2f5d-4cd9-b606-20cb55b2c90a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd786153b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-W8BXAo-VIeS-lNkU-0xsH-1v6j-IWb5-xeSbRL', 'scsi-0QEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249', 'scsi-SQEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d42ed5a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.133976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.134006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ccc377a4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14', 
'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.330754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.330859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.330877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948', 'dm-uuid-LVM-VVVRanGAMYCBBo3Ea1Is2tjcYgwKNf2qA0QNo4TmjeChe8gjBEKp176k85VNMXVp'], 'uuids': ['f13fc2e4-c586-4a34-95a4-f625771d43e0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'serial': '10b9e860', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.330890 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a', 'scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93baa594', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.330945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.330997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXgjHK-x1j6-yafV-EcrV-Z8hS-LdwZ-h63E7O', 'scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0', 'scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2180dd6a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.331026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl', 'dm-uuid-CRYPT-LUKS2-da8fc11e6dfb4dbeb694e6f7cad69a1a-fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.331046 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.331066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.331086 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:56:23.331116 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.331148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.331207 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW', 'dm-uuid-CRYPT-LUKS2-a49c734036574bbbb8952c2cd9942323-2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.397744 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.397848 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056', 'dm-uuid-LVM-IXftd1VPXOpbncKd3f2ob1nYXsz4DemJ2XJQIMxaL0NRJ8j3ZeXDz0EJW4fLUFzW'], 'uuids': ['a49c7340-3657-4bbb-b895-2c2cd9942323'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2180dd6a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 
'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.397864 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.397914 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-TjrJ6N-vXHW-nYMX-XIsI-w8Ql-NkWF-pB5l7A', 'scsi-0QEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62', 'scsi-SQEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10b9e860', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.397930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33', 'dm-uuid-LVM-ZRBeHs6onLIpNjnfPONnwMoGWYFOYt3b0sOhEPSSzOPtCa3muL1oqHvJG7beZNDD'], 'uuids': ['5145feac-f6a0-43d9-bef0-ff6b872aac71'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '002a7ab0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.397959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.397973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 
'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b', 'scsi-SQEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ef57056d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.397984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FIE3VR-hmEq-gbau-KgWX-Ie3n-RrWX-Y63w2o', 'scsi-0QEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735', 'scsi-SQEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee98996d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.398083 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '36bedc35', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1', 
'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441439 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441669 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp', 'dm-uuid-CRYPT-LUKS2-f13fc2e4c5864a3495a4f625771d43e0-A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441695 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441725 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe', 'dm-uuid-CRYPT-LUKS2-36d885a21b3e42128c82194bcbfb2fb2-dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441774 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844', 'dm-uuid-LVM-VJ9z4eyflUTf2lcw8J1Bh3VXDEKKGuPmdvxBFAfXTwWZGF4ojvc0MEIvaFMTSMoe'], 'uuids': ['36d885a2-1b3e-4212-8c82-194bcbfb2fb2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee98996d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe']}}, 'ansible_loop_var': 'item'})  2026-03-29 04:56:23.441787 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': 
{'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VkIrl1-06lK-dW9p-hM1X-TIpn-uX5t-oclg00', 'scsi-0QEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa', 'scsi-SQEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '002a7ab0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33']}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.441810 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563049 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:23.563204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '160e36ea', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563244 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD', 'dm-uuid-CRYPT-LUKS2-5145feacf6a043d9bef0ff6b872aac71-0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563292 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:23.563301 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563317 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563332 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563342 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-38-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563352 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:23.563367 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:41.323062 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:41.323226 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '641edd66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part16', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part14', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part15', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part1', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:41.323241 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:41.323261 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-29 04:56:41.323270 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:41.323286 | orchestrator |
2026-03-29 04:56:41.323294 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-29 04:56:41.323302 | orchestrator | Sunday 29 March 2026 04:56:24 +0000 (0:00:02.322) 0:01:51.722 **********
2026-03-29 04:56:41.323309 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:56:41.323317 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:56:41.323324 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:56:41.323332 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:56:41.323339 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:56:41.323346 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:56:41.323353 | orchestrator | ok: [testbed-manager]
2026-03-29 04:56:41.323360 | orchestrator |
2026-03-29 04:56:41.323368 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-29 04:56:41.323375 | orchestrator | Sunday 29 March 2026 04:56:27 +0000 (0:00:02.520) 0:01:54.243 **********
2026-03-29 04:56:41.323382 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:56:41.323389 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:56:41.323397 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:56:41.323404 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:56:41.323422 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:56:41.323438 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:56:41.323445 | orchestrator | ok: [testbed-manager]
2026-03-29 04:56:41.323453 | orchestrator |
2026-03-29 04:56:41.323460 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-29 04:56:41.323467 | orchestrator | Sunday 29 March 2026 04:56:29 +0000 (0:00:01.962) 0:01:56.206 **********
2026-03-29 04:56:41.323474 | orchestrator | ok: [testbed-node-1]
2026-03-29 04:56:41.323482 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:56:41.323489 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:41.323496 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:56:41.323504 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:56:41.323511 | orchestrator | ok: [testbed-node-0]
2026-03-29 04:56:41.323518 | orchestrator | ok: [testbed-node-2]
2026-03-29 04:56:41.323525 | orchestrator |
2026-03-29 04:56:41.323533 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-29 04:56:41.323540 | orchestrator | Sunday 29 March 2026 04:56:31 +0000 (0:00:02.705) 0:01:58.911 **********
2026-03-29 04:56:41.323612 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:41.323628 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:41.323636 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:41.323645 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:41.323653 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:41.323665 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:41.323674 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:41.323683 | orchestrator |
2026-03-29 04:56:41.323691 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-29 04:56:41.323700 | orchestrator | Sunday 29 March 2026 04:56:33 +0000 (0:00:01.807) 0:02:00.719 **********
2026-03-29 04:56:41.323709 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:41.323717 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:41.323725 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:41.323733 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:41.323741 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:41.323750 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:41.323758 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-03-29 04:56:41.323766 | orchestrator |
2026-03-29 04:56:41.323774 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-29 04:56:41.323783 | orchestrator | Sunday 29 March 2026 04:56:36 +0000 (0:00:02.703) 0:02:03.422 **********
2026-03-29 04:56:41.323791 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:56:41.323799 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:56:41.323807 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:56:41.323817 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:56:41.323825 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:56:41.323839 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:56:41.323847 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:56:41.323856 | orchestrator |
2026-03-29 04:56:41.323864 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-29 04:56:41.323872 | orchestrator | Sunday 29 March 2026 04:56:38 +0000 (0:00:01.947) 0:02:05.369 **********
2026-03-29 04:56:41.323881 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 04:56:41.323889 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-29 04:56:41.323898 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 04:56:41.323906 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-29 04:56:41.323915 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-29 04:56:41.323923 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-29 04:56:41.323932 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 04:56:41.323940 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 04:56:41.323949 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 04:56:41.323957 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-29 04:56:41.323964 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-29 04:56:41.323971 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 04:56:41.323978 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 04:56:41.323985 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 04:56:41.323993 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 04:56:41.324000 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-29 04:56:41.324007 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-29 04:56:41.324014 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-29 04:56:41.324021 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-29 04:56:41.324028 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-29 04:56:41.324035 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-29 04:56:41.324042 | orchestrator |
2026-03-29 04:56:41.324050 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-29 04:56:41.324063 | orchestrator | Sunday 29 March 2026 04:56:41 +0000 (0:00:02.980) 0:02:08.350 **********
2026-03-29 04:57:22.629711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 04:57:22.629828 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 04:57:22.629845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 04:57:22.629857 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:57:22.629868 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-29 04:57:22.629879 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-29 04:57:22.629890 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-29 04:57:22.629901 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:57:22.629919 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-29 04:57:22.629937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-29 04:57:22.629956 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-29 04:57:22.629974 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:57:22.629994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 04:57:22.630012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 04:57:22.630109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 04:57:22.630131 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.630150 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 04:57:22.630167 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 04:57:22.630215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-29 04:57:22.630261 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:22.630282 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 04:57:22.630303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-29 04:57:22.630323 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-29 04:57:22.630341 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:22.630360 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-29 04:57:22.630379 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-29 04:57:22.630399 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-29 04:57:22.630435 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:22.630455 | orchestrator |
2026-03-29 04:57:22.630475 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-29 04:57:22.630496 | orchestrator | Sunday 29 March 2026 04:56:43 +0000 (0:00:02.079) 0:02:10.430 **********
2026-03-29 04:57:22.630515 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:57:22.630536 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:57:22.630555 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:57:22.630575 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:22.630587 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 04:57:22.630598 | orchestrator |
2026-03-29 04:57:22.630609 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-29 04:57:22.630621 | orchestrator | Sunday 29 March 2026 04:56:45 +0000 (0:00:02.074) 0:02:12.504 **********
2026-03-29 04:57:22.630632 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.630658 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:22.630669 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:22.630690 | orchestrator |
2026-03-29 04:57:22.630701 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-29 04:57:22.630711 | orchestrator | Sunday 29 March 2026 04:56:47 +0000 (0:00:01.580) 0:02:14.085 **********
2026-03-29 04:57:22.630722 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.630733 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:22.630743 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:22.630754 | orchestrator |
2026-03-29 04:57:22.630765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-29 04:57:22.630775 | orchestrator | Sunday 29 March 2026 04:56:48 +0000 (0:00:01.479) 0:02:15.565 **********
2026-03-29 04:57:22.630786 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.630797 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:22.630808 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:22.630819 | orchestrator |
2026-03-29 04:57:22.630829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-29 04:57:22.630840 | orchestrator | Sunday 29 March 2026 04:56:49 +0000 (0:00:01.340) 0:02:16.905 **********
2026-03-29 04:57:22.630851 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:57:22.630863 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:57:22.630873 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:57:22.630884 | orchestrator |
2026-03-29 04:57:22.630895 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-29 04:57:22.630906 | orchestrator | Sunday 29 March 2026 04:56:51 +0000 (0:00:01.540) 0:02:18.446 **********
2026-03-29 04:57:22.630917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 04:57:22.630936 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 04:57:22.630955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 04:57:22.630975 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.630994 | orchestrator |
2026-03-29 04:57:22.631013 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-29 04:57:22.631027 | orchestrator | Sunday 29 March 2026 04:56:53 +0000 (0:00:01.652) 0:02:20.099 **********
2026-03-29 04:57:22.631048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 04:57:22.631059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 04:57:22.631069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 04:57:22.631080 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.631091 | orchestrator |
2026-03-29 04:57:22.631102 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-29 04:57:22.631133 | orchestrator | Sunday 29 March 2026 04:56:54 +0000 (0:00:01.621) 0:02:21.720 **********
2026-03-29 04:57:22.631145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 04:57:22.631155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 04:57:22.631166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 04:57:22.631212 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.631231 | orchestrator |
2026-03-29 04:57:22.631248 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-29 04:57:22.631267 | orchestrator | Sunday 29 March 2026 04:56:56 +0000 (0:00:01.587) 0:02:23.308 **********
2026-03-29 04:57:22.631279 | orchestrator | ok: [testbed-node-3]
2026-03-29 04:57:22.631289 | orchestrator | ok: [testbed-node-4]
2026-03-29 04:57:22.631300 | orchestrator | ok: [testbed-node-5]
2026-03-29 04:57:22.631311 | orchestrator |
2026-03-29 04:57:22.631321 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-29 04:57:22.631332 | orchestrator | Sunday 29 March 2026 04:56:57 +0000 (0:00:01.386) 0:02:24.694 **********
2026-03-29 04:57:22.631343 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-29 04:57:22.631353 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-29 04:57:22.631364 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-29 04:57:22.631374 | orchestrator |
2026-03-29 04:57:22.631385 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-29 04:57:22.631396 | orchestrator | Sunday 29 March 2026 04:56:59 +0000 (0:00:01.624) 0:02:26.319 **********
2026-03-29 04:57:22.631406 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 04:57:22.631417 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 04:57:22.631429 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 04:57:22.631440 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-29 04:57:22.631450 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-29 04:57:22.631461 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-29 04:57:22.631475 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-29 04:57:22.631493 | orchestrator |
2026-03-29 04:57:22.631521 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-29 04:57:22.631540 | orchestrator | Sunday 29 March 2026 04:57:01 +0000 (0:00:02.039) 0:02:28.358 **********
2026-03-29 04:57:22.631558 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 04:57:22.631578 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 04:57:22.631596 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 04:57:22.631611 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-29 04:57:22.631621 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-29 04:57:22.631632 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-29 04:57:22.631642 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-29 04:57:22.631653 | orchestrator |
2026-03-29 04:57:22.631663 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-03-29 04:57:22.631682 | orchestrator | Sunday 29 March 2026 04:57:04 +0000 (0:00:11.520) 0:02:31.234 **********
2026-03-29 04:57:22.631693 | orchestrator | changed: [testbed-manager]
2026-03-29 04:57:22.631703 | orchestrator | changed: [testbed-node-4]
2026-03-29 04:57:22.631714 | orchestrator | changed: [testbed-node-5]
2026-03-29 04:57:22.631724 | orchestrator | changed: [testbed-node-3]
2026-03-29 04:57:22.631735 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:57:22.631745 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:57:22.631756 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:57:22.631766 | orchestrator |
2026-03-29 04:57:22.631777 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-03-29 04:57:22.631788 | orchestrator | Sunday 29 March 2026 04:57:15 +0000 (0:00:02.030) 0:02:42.755 **********
2026-03-29 04:57:22.631804 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:57:22.631821 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:57:22.631840 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:57:22.631858 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.631877 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:22.631895 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:22.631911 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:22.631922 | orchestrator |
2026-03-29 04:57:22.631932 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-03-29 04:57:22.631943 | orchestrator | Sunday 29 March 2026 04:57:17 +0000 (0:00:02.030) 0:02:44.785 **********
2026-03-29 04:57:22.631957 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:57:22.631980 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:57:22.632008 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:57:22.632025 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:22.632043 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:22.632060 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:22.632077 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:22.632093 | orchestrator |
2026-03-29 04:57:22.632109 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-03-29 04:57:22.632127 | orchestrator | Sunday 29 March 2026 04:57:19 +0000 (0:00:01.896) 0:02:46.682 **********
2026-03-29 04:57:22.632145 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:22.632162 | orchestrator | changed: [testbed-node-1]
2026-03-29 04:57:22.632269 | orchestrator | changed: [testbed-node-0]
2026-03-29 04:57:22.632287 | orchestrator | changed: [testbed-node-2]
2026-03-29 04:57:22.632298 | orchestrator | changed: [testbed-node-3]
2026-03-29 04:57:22.632309 | orchestrator | changed: [testbed-node-4]
2026-03-29 04:57:22.632320 | orchestrator | changed: [testbed-node-5]
2026-03-29 04:57:22.632330 | orchestrator |
2026-03-29 04:57:22.632353 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-03-29 04:57:57.593894 | orchestrator | Sunday 29 March 2026 04:57:22 +0000 (0:00:02.970) 0:02:49.653 **********
2026-03-29 04:57:57.594105 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-29 04:57:57.594138 | orchestrator |
2026-03-29 04:57:57.594152 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-03-29 04:57:57.594164 | orchestrator | Sunday 29 March 2026 04:57:25 +0000 (0:00:02.824) 0:02:52.478 **********
2026-03-29 04:57:57.594222 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:57:57.594235 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:57:57.594246 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:57:57.594257 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:57.594268 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:57.594279 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:57.594290 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:57.594301 | orchestrator |
2026-03-29 04:57:57.594313 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-03-29 04:57:57.594349 | orchestrator | Sunday 29 March 2026 04:57:27 +0000 (0:00:01.849) 0:02:54.327 **********
2026-03-29 04:57:57.594360 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:57:57.594371 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:57:57.594382 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:57:57.594393 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:57.594404 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:57.594418 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:57.594430 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:57.594443 | orchestrator |
2026-03-29 04:57:57.594455 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-03-29 04:57:57.594468 | orchestrator | Sunday 29 March 2026 04:57:29 +0000 (0:00:02.039) 0:02:56.367 **********
2026-03-29 04:57:57.594480 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:57:57.594492 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:57:57.594505 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:57:57.594518 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:57.594531 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:57.594543 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:57.594570 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:57.594582 | orchestrator |
2026-03-29 04:57:57.594595 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-03-29 04:57:57.594608 | orchestrator | Sunday 29 March 2026 04:57:31 +0000 (0:00:01.945) 0:02:58.313 **********
2026-03-29 04:57:57.594621 | orchestrator | skipping: [testbed-node-0]
2026-03-29 04:57:57.594633 | orchestrator | skipping: [testbed-node-1]
2026-03-29 04:57:57.594645 | orchestrator | skipping: [testbed-node-2]
2026-03-29 04:57:57.594657 | orchestrator | skipping: [testbed-node-3]
2026-03-29 04:57:57.594670 | orchestrator | skipping: [testbed-node-4]
2026-03-29 04:57:57.594683 | orchestrator | skipping: [testbed-node-5]
2026-03-29 04:57:57.594696 | orchestrator | skipping: [testbed-manager]
2026-03-29 04:57:57.594708 | orchestrator |
2026-03-29 04:57:57.594720 | orchestrator | TASK [ceph-validate : Fail on unsupported
CentOS release] ********************** 2026-03-29 04:57:57.594733 | orchestrator | Sunday 29 March 2026 04:57:33 +0000 (0:00:02.059) 0:03:00.372 ********** 2026-03-29 04:57:57.594745 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.594758 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.594770 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.594781 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.594791 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.594802 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.594813 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.594823 | orchestrator | 2026-03-29 04:57:57.594834 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-03-29 04:57:57.594846 | orchestrator | Sunday 29 March 2026 04:57:35 +0000 (0:00:01.983) 0:03:02.356 ********** 2026-03-29 04:57:57.594856 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.594868 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.594878 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.594889 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.594899 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.594910 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.594921 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.594931 | orchestrator | 2026-03-29 04:57:57.594942 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-03-29 04:57:57.594953 | orchestrator | Sunday 29 March 2026 04:57:37 +0000 (0:00:02.040) 0:03:04.396 ********** 2026-03-29 04:57:57.594964 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.594976 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.594987 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.594997 | orchestrator | 
skipping: [testbed-node-3] 2026-03-29 04:57:57.595008 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.595019 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.595038 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.595049 | orchestrator | 2026-03-29 04:57:57.595060 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-03-29 04:57:57.595071 | orchestrator | Sunday 29 March 2026 04:57:39 +0000 (0:00:01.990) 0:03:06.387 ********** 2026-03-29 04:57:57.595082 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.595092 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.595103 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.595114 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.595125 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.595135 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.595146 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.595157 | orchestrator | 2026-03-29 04:57:57.595167 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-03-29 04:57:57.595208 | orchestrator | Sunday 29 March 2026 04:57:41 +0000 (0:00:02.086) 0:03:08.474 ********** 2026-03-29 04:57:57.595219 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.595230 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.595241 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.595251 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.595262 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.595291 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.595303 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.595314 | orchestrator | 2026-03-29 04:57:57.595324 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-03-29 
04:57:57.595335 | orchestrator | Sunday 29 March 2026 04:57:43 +0000 (0:00:01.997) 0:03:10.471 ********** 2026-03-29 04:57:57.595346 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.595357 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.595367 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.595378 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.595388 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.595399 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.595410 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.595420 | orchestrator | 2026-03-29 04:57:57.595431 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-03-29 04:57:57.595442 | orchestrator | Sunday 29 March 2026 04:57:45 +0000 (0:00:01.975) 0:03:12.447 ********** 2026-03-29 04:57:57.595452 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.595463 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.595474 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.595484 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.595495 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.595506 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.595516 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.595527 | orchestrator | 2026-03-29 04:57:57.595537 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-03-29 04:57:57.595548 | orchestrator | Sunday 29 March 2026 04:57:47 +0000 (0:00:02.018) 0:03:14.465 ********** 2026-03-29 04:57:57.595559 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.595570 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.595580 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.595591 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.595602 | orchestrator 
| skipping: [testbed-node-4] 2026-03-29 04:57:57.595613 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.595623 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.595634 | orchestrator | 2026-03-29 04:57:57.595645 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-03-29 04:57:57.595656 | orchestrator | Sunday 29 March 2026 04:57:49 +0000 (0:00:01.933) 0:03:16.399 ********** 2026-03-29 04:57:57.595672 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.595683 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.595701 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.595714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 04:57:57.595727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 04:57:57.595737 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.595748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 04:57:57.595759 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 04:57:57.595770 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.595781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 04:57:57.595792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  
2026-03-29 04:57:57.595803 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.595813 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.595824 | orchestrator | 2026-03-29 04:57:57.595835 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-03-29 04:57:57.595846 | orchestrator | Sunday 29 March 2026 04:57:51 +0000 (0:00:02.112) 0:03:18.512 ********** 2026-03-29 04:57:57.595857 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.595868 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.595879 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.595889 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.595900 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.595910 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.595921 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.595932 | orchestrator | 2026-03-29 04:57:57.595942 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-03-29 04:57:57.595953 | orchestrator | Sunday 29 March 2026 04:57:53 +0000 (0:00:02.073) 0:03:20.586 ********** 2026-03-29 04:57:57.595964 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.595975 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.595985 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.595996 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.596007 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.596017 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.596028 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.596039 | orchestrator | 2026-03-29 04:57:57.596049 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-03-29 04:57:57.596060 | orchestrator | Sunday 29 March 2026 04:57:55 +0000 (0:00:02.207) 0:03:22.794 ********** 
2026-03-29 04:57:57.596071 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:57:57.596081 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:57:57.596092 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:57:57.596103 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:57:57.596113 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:57:57.596124 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:57:57.596135 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:57:57.596146 | orchestrator | 2026-03-29 04:57:57.596163 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-03-29 04:58:17.949459 | orchestrator | Sunday 29 March 2026 04:57:57 +0000 (0:00:01.831) 0:03:24.625 ********** 2026-03-29 04:58:17.949576 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:17.949593 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:17.949606 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:17.949649 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.949661 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.949673 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:17.949685 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:17.949697 | orchestrator | 2026-03-29 04:58:17.949709 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-03-29 04:58:17.949721 | orchestrator | Sunday 29 March 2026 04:57:59 +0000 (0:00:02.151) 0:03:26.777 ********** 2026-03-29 04:58:17.949733 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:17.949744 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:17.949756 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:17.949766 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.949777 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.949787 | orchestrator | skipping: [testbed-node-5] 
2026-03-29 04:58:17.949797 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:17.949808 | orchestrator | 2026-03-29 04:58:17.949819 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-03-29 04:58:17.949829 | orchestrator | Sunday 29 March 2026 04:58:01 +0000 (0:00:02.004) 0:03:28.781 ********** 2026-03-29 04:58:17.949840 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:17.949851 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:17.949862 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:17.949872 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.949883 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.949893 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:17.949903 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:17.949914 | orchestrator | 2026-03-29 04:58:17.949925 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-03-29 04:58:17.949936 | orchestrator | Sunday 29 March 2026 04:58:03 +0000 (0:00:01.817) 0:03:30.599 ********** 2026-03-29 04:58:17.949946 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:17.949958 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:17.949969 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:17.949993 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:17.950006 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 04:58:17.950078 | orchestrator | 2026-03-29 04:58:17.950091 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-03-29 04:58:17.950104 | orchestrator | Sunday 29 March 2026 04:58:05 +0000 (0:00:02.414) 0:03:33.014 ********** 2026-03-29 04:58:17.950116 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:58:17.950129 | orchestrator | ok: 
[testbed-node-4] 2026-03-29 04:58:17.950141 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:58:17.950153 | orchestrator | 2026-03-29 04:58:17.950164 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-03-29 04:58:17.950222 | orchestrator | Sunday 29 March 2026 04:58:07 +0000 (0:00:01.402) 0:03:34.416 ********** 2026-03-29 04:58:17.950237 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 04:58:17.950251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 04:58:17.950263 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.950276 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 04:58:17.950287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 04:58:17.950299 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.950311 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 04:58:17.950333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 04:58:17.950346 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:17.950358 | orchestrator | 2026-03-29 04:58:17.950369 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-03-29 04:58:17.950381 | orchestrator | Sunday 29 March 2026 
04:58:08 +0000 (0:00:01.364) 0:03:35.781 ********** 2026-03-29 04:58:17.950394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:17.950408 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:17.950419 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.950447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:17.950459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:17.950470 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.950481 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}, 
'ansible_loop_var': 'item'})  2026-03-29 04:58:17.950493 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:17.950504 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:17.950515 | orchestrator | 2026-03-29 04:58:17.950526 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-03-29 04:58:17.950542 | orchestrator | Sunday 29 March 2026 04:58:10 +0000 (0:00:01.594) 0:03:37.375 ********** 2026-03-29 04:58:17.950553 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.950563 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.950574 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:17.950585 | orchestrator | 2026-03-29 04:58:17.950596 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-03-29 04:58:17.950606 | orchestrator | Sunday 29 March 2026 04:58:11 +0000 (0:00:01.378) 0:03:38.754 ********** 2026-03-29 04:58:17.950617 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.950627 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.950638 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:17.950649 | orchestrator | 2026-03-29 04:58:17.950660 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-03-29 04:58:17.950670 | orchestrator | Sunday 29 March 2026 04:58:13 +0000 (0:00:01.318) 0:03:40.072 ********** 2026-03-29 04:58:17.950687 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.950698 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.950709 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:17.950720 | 
orchestrator | 2026-03-29 04:58:17.950731 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-03-29 04:58:17.950741 | orchestrator | Sunday 29 March 2026 04:58:14 +0000 (0:00:01.339) 0:03:41.412 ********** 2026-03-29 04:58:17.950751 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:17.950762 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:17.950773 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:17.950783 | orchestrator | 2026-03-29 04:58:17.950793 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-03-29 04:58:17.950804 | orchestrator | Sunday 29 March 2026 04:58:15 +0000 (0:00:01.385) 0:03:42.798 ********** 2026-03-29 04:58:17.950815 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}) 2026-03-29 04:58:17.950827 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}) 2026-03-29 04:58:17.950838 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}) 2026-03-29 04:58:17.950848 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}) 2026-03-29 04:58:17.950859 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}) 2026-03-29 04:58:17.950870 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}) 2026-03-29 04:58:17.950881 | orchestrator | 2026-03-29 04:58:17.950892 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-03-29 04:58:17.950903 | orchestrator | Sunday 29 March 2026 04:58:17 +0000 (0:00:02.065) 0:03:44.863 ********** 2026-03-29 04:58:17.950928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c/osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1774752755.2604833, 'mtime': 1774752755.2554832, 'ctime': 1774752755.2554832, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c/osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:20.751313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-09734191-f9bf-5626-be02-fa226447c12f/osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 
1774752776.4328551, 'mtime': 1774752776.428855, 'ctime': 1774752776.428855, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-09734191-f9bf-5626-be02-fa226447c12f/osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:20.751449 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:20.751469 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-df205cf6-8b40-53f0-aec9-c93c6a681056/osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1774752755.020052, 'mtime': 1774752755.0150518, 'ctime': 1774752755.0150518, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-df205cf6-8b40-53f0-aec9-c93c6a681056/osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:20.751483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948/osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1774752776.2754204, 'mtime': 1774752776.2704203, 'ctime': 1774752776.2704203, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948/osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:20.751495 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:20.751533 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844/osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1774752755.5348525, 'mtime': 1774752755.5318525, 'ctime': 1774752755.5318525, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844/osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:20.751556 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33/osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1774752774.012181, 'mtime': 1774752774.009181, 'ctime': 1774752774.009181, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 
'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33/osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:20.751569 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:20.751580 | orchestrator | 2026-03-29 04:58:20.751592 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-03-29 04:58:20.751605 | orchestrator | Sunday 29 March 2026 04:58:19 +0000 (0:00:01.471) 0:03:46.335 ********** 2026-03-29 04:58:20.751616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 04:58:20.751629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 04:58:20.751640 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:20.751652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 04:58:20.751663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 04:58:20.751674 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:20.751685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 
'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 04:58:20.751696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 04:58:20.751707 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:20.751718 | orchestrator | 2026-03-29 04:58:20.751729 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-03-29 04:58:20.751747 | orchestrator | Sunday 29 March 2026 04:58:20 +0000 (0:00:01.354) 0:03:47.690 ********** 2026-03-29 04:58:20.751767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833582 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:30.833599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 
'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833623 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:30.833635 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833657 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:30.833668 | orchestrator | 2026-03-29 04:58:30.833680 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-03-29 04:58:30.833693 | orchestrator | Sunday 29 March 2026 04:58:21 +0000 (0:00:01.354) 0:03:49.044 ********** 2026-03-29 04:58:30.833705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 04:58:30.833718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 04:58:30.833729 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:30.833740 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 04:58:30.833751 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 04:58:30.833763 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:30.833783 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 04:58:30.833803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 04:58:30.833823 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:30.833860 | orchestrator | 2026-03-29 04:58:30.833873 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-03-29 04:58:30.833885 | orchestrator | Sunday 29 March 2026 04:58:23 +0000 (0:00:01.579) 0:03:50.623 ********** 2026-03-29 04:58:30.833896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833908 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833919 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:30.833948 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': 
{'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833970 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.833982 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:30.833996 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.834009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}, 'ansible_loop_var': 'item'})  2026-03-29 04:58:30.834097 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:30.834117 | orchestrator | 2026-03-29 04:58:30.834138 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-03-29 04:58:30.834157 | orchestrator | Sunday 29 March 2026 04:58:24 +0000 (0:00:01.412) 0:03:52.036 ********** 2026-03-29 04:58:30.834200 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:30.834213 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:30.834224 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:30.834235 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:30.834246 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 04:58:30.834256 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:30.834267 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:30.834278 | orchestrator | 2026-03-29 04:58:30.834288 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-03-29 04:58:30.834299 | orchestrator | Sunday 29 March 2026 04:58:26 +0000 (0:00:01.828) 0:03:53.865 ********** 2026-03-29 04:58:30.834310 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:30.834320 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:30.834331 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:30.834342 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:30.834353 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 04:58:30.834364 | orchestrator | 2026-03-29 04:58:30.834375 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-03-29 04:58:30.834405 | orchestrator | Sunday 29 March 2026 04:58:29 +0000 (0:00:02.569) 0:03:56.435 ********** 2026-03-29 04:58:30.834433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834524 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:30.834541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834631 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:30.834649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:30.834718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.116865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117000 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117013 | orchestrator 
| 2026-03-29 04:58:48.117020 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-03-29 04:58:48.117027 | orchestrator | Sunday 29 March 2026 04:58:30 +0000 (0:00:01.425) 0:03:57.860 ********** 2026-03-29 04:58:48.117033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117060 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-03-29 04:58:48.117108 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117138 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117143 | orchestrator | 2026-03-29 04:58:48.117149 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-03-29 04:58:48.117154 | orchestrator | Sunday 29 March 2026 04:58:32 +0000 (0:00:01.689) 0:03:59.550 ********** 2026-03-29 04:58:48.117159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-03-29 04:58:48.117232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117259 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117303 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 04:58:48.117356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-29 04:58:48.117366 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117373 | orchestrator | 2026-03-29 04:58:48.117380 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-03-29 04:58:48.117417 | orchestrator | Sunday 29 March 2026 04:58:33 +0000 (0:00:01.429) 0:04:00.979 ********** 2026-03-29 04:58:48.117427 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:48.117436 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:48.117444 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:48.117453 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117461 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117470 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117505 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:48.117513 | orchestrator | 2026-03-29 04:58:48.117519 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-03-29 04:58:48.117525 | orchestrator | Sunday 29 March 2026 04:58:35 +0000 (0:00:01.898) 0:04:02.878 ********** 2026-03-29 04:58:48.117531 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:48.117537 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:48.117543 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:48.117549 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117555 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117561 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117567 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:48.117573 | orchestrator | 2026-03-29 04:58:48.117579 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-03-29 04:58:48.117585 | orchestrator | Sunday 29 March 2026 04:58:37 +0000 (0:00:02.048) 0:04:04.926 ********** 2026-03-29 04:58:48.117591 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 04:58:48.117596 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:48.117603 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:48.117608 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117615 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117620 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117626 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:48.117632 | orchestrator | 2026-03-29 04:58:48.117638 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-03-29 04:58:48.117645 | orchestrator | Sunday 29 March 2026 04:58:39 +0000 (0:00:01.986) 0:04:06.912 ********** 2026-03-29 04:58:48.117651 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:48.117657 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:48.117663 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:48.117669 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117675 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117680 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117685 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:48.117690 | orchestrator | 2026-03-29 04:58:48.117715 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-03-29 04:58:48.117722 | orchestrator | Sunday 29 March 2026 04:58:41 +0000 (0:00:01.922) 0:04:08.834 ********** 2026-03-29 04:58:48.117727 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:48.117732 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:48.117737 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:48.117742 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117747 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117752 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117764 | 
orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:48.117769 | orchestrator | 2026-03-29 04:58:48.117774 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-03-29 04:58:48.117779 | orchestrator | Sunday 29 March 2026 04:58:43 +0000 (0:00:02.024) 0:04:10.860 ********** 2026-03-29 04:58:48.117784 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:48.117789 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:48.117794 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:48.117799 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117804 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117809 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117814 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:48.117819 | orchestrator | 2026-03-29 04:58:48.117824 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-03-29 04:58:48.117830 | orchestrator | Sunday 29 March 2026 04:58:45 +0000 (0:00:02.045) 0:04:12.905 ********** 2026-03-29 04:58:48.117835 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:48.117840 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:48.117845 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:48.117850 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:48.117855 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:48.117860 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:48.117865 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:48.117870 | orchestrator | 2026-03-29 04:58:48.117875 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-03-29 04:58:48.117880 | orchestrator | Sunday 29 March 2026 04:58:47 +0000 (0:00:02.096) 0:04:15.001 ********** 2026-03-29 04:58:48.117891 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:50.926823 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:50.926905 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:50.926916 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:50.926923 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:50.926931 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:50.926937 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:50.926944 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:50.926950 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:50.926956 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:50.926962 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:50.926967 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:50.926992 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:50.926998 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:50.927004 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:50.927010 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:50.927015 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:50.927021 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:50.927027 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:50.927032 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-03-29 04:58:50.927038 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:50.927048 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:50.927057 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:50.927066 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:50.927075 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:50.927099 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:50.927117 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:50.927129 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:50.927135 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:50.927141 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 
'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:50.927147 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:50.927152 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:50.927163 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:50.927169 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:50.927230 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:50.927238 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:50.927244 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:50.927250 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:50.927256 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:50.927262 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:50.927267 | orchestrator | skipping: 
[testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:50.927273 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:50.927279 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:50.927285 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:50.927290 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:50.927296 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:50.927302 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:50.927308 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:50.927313 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:50.927323 | orchestrator | 2026-03-29 04:58:50.927332 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-03-29 04:58:50.927343 | orchestrator | Sunday 29 March 2026 04:58:50 +0000 (0:00:02.157) 0:04:17.159 ********** 2026-03-29 04:58:50.927353 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
04:58:50.927362 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:50.927373 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:50.927391 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:54.274561 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:54.274690 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:58:54.274722 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:54.274735 | orchestrator | 2026-03-29 04:58:54.274747 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-03-29 04:58:54.274760 | orchestrator | Sunday 29 March 2026 04:58:52 +0000 (0:00:02.091) 0:04:19.250 ********** 2026-03-29 04:58:54.274773 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:54.274807 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:54.274821 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:54.274833 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:54.274845 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:54.274857 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 
'client.manila'})  2026-03-29 04:58:54.274868 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:58:54.274879 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:54.274890 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:54.274900 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:54.274911 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:54.274922 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:54.274933 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:54.274943 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:58:54.274954 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:54.274965 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:54.274975 | orchestrator | skipping: [testbed-node-2] => (item={'caps': 
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:54.274986 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:54.274997 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:54.275007 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:54.275018 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:58:54.275029 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:54.275065 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:54.275083 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:54.275096 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:54.275109 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:54.275122 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:54.275135 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:54.275147 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:54.275159 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:54.275171 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:54.275215 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:54.275229 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:54.275241 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:58:54.275253 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:54.275264 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:58:54.275274 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, 
profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:54.275285 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 04:58:54.275295 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 04:58:54.275306 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:54.275317 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:54.275327 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:54.275345 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:58:54.275356 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:58:54.275367 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 04:58:54.275377 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 04:58:54.275388 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile 
rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 04:58:54.275416 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 04:59:34.843482 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:59:34.843576 | orchestrator | 2026-03-29 04:59:34.843586 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-03-29 04:59:34.843594 | orchestrator | Sunday 29 March 2026 04:58:54 +0000 (0:00:02.052) 0:04:21.303 ********** 2026-03-29 04:59:34.843601 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:59:34.843607 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:59:34.843614 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:59:34.843620 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:59:34.843627 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:59:34.843633 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:59:34.843639 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:59:34.843646 | orchestrator | 2026-03-29 04:59:34.843652 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-03-29 04:59:34.843658 | orchestrator | Sunday 29 March 2026 04:58:56 +0000 (0:00:02.362) 0:04:23.666 ********** 2026-03-29 04:59:34.843664 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:59:34.843671 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:59:34.843677 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:59:34.843683 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:59:34.843689 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:59:34.843695 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:59:34.843701 | orchestrator | skipping: [testbed-manager] 2026-03-29 
04:59:34.843707 | orchestrator | 2026-03-29 04:59:34.843713 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-03-29 04:59:34.843720 | orchestrator | Sunday 29 March 2026 04:58:58 +0000 (0:00:02.071) 0:04:25.737 ********** 2026-03-29 04:59:34.843726 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:59:34.843732 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:59:34.843738 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:59:34.843744 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:59:34.843750 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:59:34.843756 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:59:34.843763 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:59:34.843769 | orchestrator | 2026-03-29 04:59:34.843775 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-29 04:59:34.843781 | orchestrator | Sunday 29 March 2026 04:59:00 +0000 (0:00:02.291) 0:04:28.029 ********** 2026-03-29 04:59:34.843789 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-29 04:59:34.843797 | orchestrator | 2026-03-29 04:59:34.843803 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-03-29 04:59:34.843809 | orchestrator | Sunday 29 March 2026 04:59:03 +0000 (0:00:02.613) 0:04:30.642 ********** 2026-03-29 04:59:34.843835 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-29 04:59:34.843842 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-29 04:59:34.843848 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-29 
04:59:34.843854 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-29 04:59:34.843860 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-29 04:59:34.843866 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-29 04:59:34.843872 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-29 04:59:34.843878 | orchestrator | 2026-03-29 04:59:34.843885 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-03-29 04:59:34.843891 | orchestrator | Sunday 29 March 2026 04:59:05 +0000 (0:00:01.988) 0:04:32.630 ********** 2026-03-29 04:59:34.843897 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:59:34.843903 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:59:34.843909 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:59:34.843915 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:59:34.843921 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:59:34.843927 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:59:34.843933 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:59:34.843939 | orchestrator | 2026-03-29 04:59:34.843946 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-03-29 04:59:34.843952 | orchestrator | Sunday 29 March 2026 04:59:07 +0000 (0:00:02.145) 0:04:34.775 ********** 2026-03-29 04:59:34.843958 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:59:34.843964 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:59:34.843970 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:59:34.843977 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:59:34.843983 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:59:34.843989 | orchestrator | skipping: [testbed-node-5] 
2026-03-29 04:59:34.843995 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:59:34.844001 | orchestrator | 2026-03-29 04:59:34.844007 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-03-29 04:59:34.844014 | orchestrator | Sunday 29 March 2026 04:59:09 +0000 (0:00:01.934) 0:04:36.710 ********** 2026-03-29 04:59:34.844020 | orchestrator | ok: [testbed-node-1] 2026-03-29 04:59:34.844028 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:59:34.844035 | orchestrator | ok: [testbed-node-2] 2026-03-29 04:59:34.844042 | orchestrator | ok: [testbed-node-3] 2026-03-29 04:59:34.844049 | orchestrator | ok: [testbed-node-4] 2026-03-29 04:59:34.844056 | orchestrator | ok: [testbed-node-5] 2026-03-29 04:59:34.844063 | orchestrator | ok: [testbed-manager] 2026-03-29 04:59:34.844070 | orchestrator | 2026-03-29 04:59:34.844077 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-03-29 04:59:34.844084 | orchestrator | Sunday 29 March 2026 04:59:11 +0000 (0:00:02.247) 0:04:38.957 ********** 2026-03-29 04:59:34.844091 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:59:34.844098 | orchestrator | skipping: [testbed-node-1] 2026-03-29 04:59:34.844105 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:59:34.844125 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:59:34.844143 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:59:34.844151 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:59:34.844158 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:59:34.844166 | orchestrator | 2026-03-29 04:59:34.844173 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-29 04:59:34.844225 | orchestrator | Sunday 29 March 2026 04:59:14 +0000 (0:00:02.241) 0:04:41.199 ********** 2026-03-29 04:59:34.844233 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:59:34.844247 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 04:59:34.844254 | orchestrator | skipping: [testbed-node-2] 2026-03-29 04:59:34.844264 | orchestrator | skipping: [testbed-node-3] 2026-03-29 04:59:34.844271 | orchestrator | skipping: [testbed-node-4] 2026-03-29 04:59:34.844278 | orchestrator | skipping: [testbed-node-5] 2026-03-29 04:59:34.844285 | orchestrator | skipping: [testbed-manager] 2026-03-29 04:59:34.844292 | orchestrator | 2026-03-29 04:59:34.844300 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-03-29 04:59:34.844307 | orchestrator | Sunday 29 March 2026 04:59:16 +0000 (0:00:02.547) 0:04:43.747 ********** 2026-03-29 04:59:34.844314 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:59:34.844323 | orchestrator | 2026-03-29 04:59:34.844332 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-03-29 04:59:34.844342 | orchestrator | Sunday 29 March 2026 04:59:19 +0000 (0:00:02.727) 0:04:46.474 ********** 2026-03-29 04:59:34.844352 | orchestrator | skipping: [testbed-node-0] 2026-03-29 04:59:34.844362 | orchestrator | 2026-03-29 04:59:34.844372 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-03-29 04:59:34.844383 | orchestrator | 2026-03-29 04:59:34.844392 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-29 04:59:34.844401 | orchestrator | Sunday 29 March 2026 04:59:21 +0000 (0:00:02.270) 0:04:48.745 ********** 2026-03-29 04:59:34.844411 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:59:34.844421 | orchestrator | 2026-03-29 04:59:34.844430 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-29 04:59:34.844440 | orchestrator | Sunday 29 March 2026 04:59:23 +0000 (0:00:01.489) 0:04:50.235 ********** 2026-03-29 04:59:34.844450 | orchestrator | ok: [testbed-node-0] 2026-03-29 04:59:34.844460 | 
orchestrator | 2026-03-29 04:59:34.844469 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-03-29 04:59:34.844479 | orchestrator | Sunday 29 March 2026 04:59:24 +0000 (0:00:01.115) 0:04:51.350 ********** 2026-03-29 04:59:34.844492 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-29 04:59:34.844504 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-29 04:59:34.844515 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-29 04:59:34.844522 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-29 04:59:34.844530 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 
'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-29 04:59:34.844547 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}])  2026-03-29 04:59:34.844555 | orchestrator | 2026-03-29 04:59:34.844561 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-29 04:59:34.844567 | orchestrator | 2026-03-29 04:59:34.844576 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-29 04:59:34.844605 | orchestrator | Sunday 29 March 2026 04:59:34 +0000 (0:00:10.516) 0:05:01.867 ********** 2026-03-29 05:00:02.315428 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.315568 | orchestrator | 2026-03-29 05:00:02.315586 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-29 05:00:02.315600 | orchestrator | Sunday 29 March 2026 04:59:36 +0000 (0:00:01.481) 0:05:03.349 ********** 2026-03-29 05:00:02.315611 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.315661 | orchestrator | 2026-03-29 05:00:02.315675 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-29 05:00:02.315686 | orchestrator | Sunday 29 March 2026 04:59:37 +0000 (0:00:01.110) 0:05:04.459 ********** 2026-03-29 05:00:02.315698 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 05:00:02.315710 | orchestrator | 2026-03-29 05:00:02.315721 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-29 05:00:02.315732 | orchestrator | Sunday 29 March 2026 04:59:38 +0000 (0:00:01.115) 0:05:05.575 ********** 2026-03-29 05:00:02.315743 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.315754 | orchestrator | 2026-03-29 05:00:02.315765 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-29 05:00:02.315776 | orchestrator | Sunday 29 March 2026 04:59:39 +0000 (0:00:01.117) 0:05:06.692 ********** 2026-03-29 05:00:02.315787 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-29 05:00:02.315798 | orchestrator | 2026-03-29 05:00:02.315810 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-29 05:00:02.315821 | orchestrator | Sunday 29 March 2026 04:59:40 +0000 (0:00:01.097) 0:05:07.790 ********** 2026-03-29 05:00:02.315832 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.315843 | orchestrator | 2026-03-29 05:00:02.315854 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-29 05:00:02.315865 | orchestrator | Sunday 29 March 2026 04:59:42 +0000 (0:00:01.484) 0:05:09.275 ********** 2026-03-29 05:00:02.315876 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.315887 | orchestrator | 2026-03-29 05:00:02.315897 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-29 05:00:02.315908 | orchestrator | Sunday 29 March 2026 04:59:43 +0000 (0:00:01.121) 0:05:10.396 ********** 2026-03-29 05:00:02.315919 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.315936 | orchestrator | 2026-03-29 05:00:02.315955 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2026-03-29 05:00:02.315975 | orchestrator | Sunday 29 March 2026 04:59:44 +0000 (0:00:01.473) 0:05:11.870 ********** 2026-03-29 05:00:02.315995 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.316013 | orchestrator | 2026-03-29 05:00:02.316032 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-29 05:00:02.316048 | orchestrator | Sunday 29 March 2026 04:59:45 +0000 (0:00:01.162) 0:05:13.032 ********** 2026-03-29 05:00:02.316065 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.316081 | orchestrator | 2026-03-29 05:00:02.316118 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-29 05:00:02.316151 | orchestrator | Sunday 29 March 2026 04:59:47 +0000 (0:00:01.140) 0:05:14.173 ********** 2026-03-29 05:00:02.316168 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.316211 | orchestrator | 2026-03-29 05:00:02.316263 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-29 05:00:02.316285 | orchestrator | Sunday 29 March 2026 04:59:48 +0000 (0:00:01.139) 0:05:15.313 ********** 2026-03-29 05:00:02.316303 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:02.316320 | orchestrator | 2026-03-29 05:00:02.316338 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-29 05:00:02.316358 | orchestrator | Sunday 29 March 2026 04:59:49 +0000 (0:00:01.120) 0:05:16.434 ********** 2026-03-29 05:00:02.316378 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.316396 | orchestrator | 2026-03-29 05:00:02.316415 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-29 05:00:02.316432 | orchestrator | Sunday 29 March 2026 04:59:50 +0000 (0:00:01.125) 0:05:17.559 ********** 2026-03-29 05:00:02.316452 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:00:02.316471 
| orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 05:00:02.316491 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:00:02.316508 | orchestrator | 2026-03-29 05:00:02.316524 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-29 05:00:02.316535 | orchestrator | Sunday 29 March 2026 04:59:52 +0000 (0:00:01.613) 0:05:19.172 ********** 2026-03-29 05:00:02.316546 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:02.316556 | orchestrator | 2026-03-29 05:00:02.316567 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-29 05:00:02.316578 | orchestrator | Sunday 29 March 2026 04:59:53 +0000 (0:00:01.296) 0:05:20.469 ********** 2026-03-29 05:00:02.316589 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:00:02.316600 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 05:00:02.316611 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:00:02.316622 | orchestrator | 2026-03-29 05:00:02.316632 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-29 05:00:02.316643 | orchestrator | Sunday 29 March 2026 04:59:56 +0000 (0:00:03.162) 0:05:23.631 ********** 2026-03-29 05:00:02.316654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 05:00:02.316666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 05:00:02.316677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 05:00:02.316688 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:02.316698 | orchestrator | 2026-03-29 05:00:02.316709 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
********************* 2026-03-29 05:00:02.316720 | orchestrator | Sunday 29 March 2026 04:59:57 +0000 (0:00:01.373) 0:05:25.005 ********** 2026-03-29 05:00:02.316770 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 05:00:02.316786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 05:00:02.316798 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 05:00:02.316809 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:02.316820 | orchestrator | 2026-03-29 05:00:02.316831 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-29 05:00:02.316842 | orchestrator | Sunday 29 March 2026 04:59:59 +0000 (0:00:01.906) 0:05:26.911 ********** 2026-03-29 05:00:02.316855 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:02.316881 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:02.316893 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:02.316904 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:02.316915 | orchestrator | 2026-03-29 05:00:02.316926 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-29 05:00:02.316937 | orchestrator | Sunday 29 March 2026 05:00:01 +0000 (0:00:01.175) 0:05:28.087 ********** 2026-03-29 05:00:02.316949 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '76a3923fe123', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 04:59:53.952684', 'end': '2026-03-29 04:59:54.007011', 'delta': '0:00:00.054327', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['76a3923fe123'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-29 05:00:02.316964 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a6db66d8015c', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 04:59:54.576147', 'end': '2026-03-29 04:59:54.618859', 'delta': '0:00:00.042712', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6db66d8015c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-29 05:00:02.316988 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5a2b09aac491', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 04:59:55.386726', 'end': '2026-03-29 04:59:55.430848', 'delta': '0:00:00.044122', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5a2b09aac491'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-29 05:00:18.605989 | orchestrator | 2026-03-29 05:00:18.606155 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-29 05:00:18.606312 | orchestrator | Sunday 29 March 2026 05:00:02 +0000 (0:00:01.260) 0:05:29.347 ********** 2026-03-29 05:00:18.606328 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:18.606340 | orchestrator | 2026-03-29 05:00:18.606350 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-29 05:00:18.606361 | orchestrator | 
Sunday 29 March 2026 05:00:03 +0000 (0:00:01.298) 0:05:30.646 ********** 2026-03-29 05:00:18.606371 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606382 | orchestrator | 2026-03-29 05:00:18.606392 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-29 05:00:18.606403 | orchestrator | Sunday 29 March 2026 05:00:04 +0000 (0:00:01.233) 0:05:31.879 ********** 2026-03-29 05:00:18.606413 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:18.606423 | orchestrator | 2026-03-29 05:00:18.606433 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-29 05:00:18.606443 | orchestrator | Sunday 29 March 2026 05:00:05 +0000 (0:00:01.134) 0:05:33.014 ********** 2026-03-29 05:00:18.606452 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-29 05:00:18.606463 | orchestrator | 2026-03-29 05:00:18.606473 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 05:00:18.606484 | orchestrator | Sunday 29 March 2026 05:00:08 +0000 (0:00:02.057) 0:05:35.072 ********** 2026-03-29 05:00:18.606495 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:00:18.606504 | orchestrator | 2026-03-29 05:00:18.606514 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-29 05:00:18.606524 | orchestrator | Sunday 29 March 2026 05:00:09 +0000 (0:00:01.155) 0:05:36.227 ********** 2026-03-29 05:00:18.606535 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606545 | orchestrator | 2026-03-29 05:00:18.606555 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-29 05:00:18.606565 | orchestrator | Sunday 29 March 2026 05:00:10 +0000 (0:00:01.088) 0:05:37.316 ********** 2026-03-29 05:00:18.606575 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606585 | orchestrator | 2026-03-29 
05:00:18.606595 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 05:00:18.606605 | orchestrator | Sunday 29 March 2026 05:00:11 +0000 (0:00:01.080) 0:05:38.397 ********** 2026-03-29 05:00:18.606616 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606626 | orchestrator | 2026-03-29 05:00:18.606636 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-29 05:00:18.606646 | orchestrator | Sunday 29 March 2026 05:00:12 +0000 (0:00:00.903) 0:05:39.301 ********** 2026-03-29 05:00:18.606656 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606666 | orchestrator | 2026-03-29 05:00:18.606676 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-29 05:00:18.606687 | orchestrator | Sunday 29 March 2026 05:00:13 +0000 (0:00:00.902) 0:05:40.203 ********** 2026-03-29 05:00:18.606696 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606706 | orchestrator | 2026-03-29 05:00:18.606716 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-29 05:00:18.606727 | orchestrator | Sunday 29 March 2026 05:00:14 +0000 (0:00:00.888) 0:05:41.092 ********** 2026-03-29 05:00:18.606738 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606748 | orchestrator | 2026-03-29 05:00:18.606759 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-29 05:00:18.606769 | orchestrator | Sunday 29 March 2026 05:00:14 +0000 (0:00:00.878) 0:05:41.971 ********** 2026-03-29 05:00:18.606779 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606789 | orchestrator | 2026-03-29 05:00:18.606800 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-29 05:00:18.606810 | orchestrator | Sunday 29 March 2026 05:00:15 +0000 (0:00:00.889) 
0:05:42.860 ********** 2026-03-29 05:00:18.606820 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606831 | orchestrator | 2026-03-29 05:00:18.606841 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-29 05:00:18.606860 | orchestrator | Sunday 29 March 2026 05:00:16 +0000 (0:00:00.904) 0:05:43.764 ********** 2026-03-29 05:00:18.606870 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:18.606880 | orchestrator | 2026-03-29 05:00:18.606890 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-29 05:00:18.606900 | orchestrator | Sunday 29 March 2026 05:00:17 +0000 (0:00:00.875) 0:05:44.640 ********** 2026-03-29 05:00:18.606912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:00:18.606940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:00:18.606972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:00:18.606984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 05:00:18.606996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:00:18.607007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:00:18.607017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:00:18.607041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8615e525', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-29 05:00:19.768650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:00:19.768763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:00:19.768780 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:00:19.768792 | orchestrator | 2026-03-29 05:00:19.768802 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-29 05:00:19.768813 | orchestrator | Sunday 29 March 2026 05:00:18 +0000 (0:00:00.996) 0:05:45.636 ********** 2026-03-29 05:00:19.768825 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:19.768837 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:19.768871 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:19.768897 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:19.768926 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:19.768937 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:19.768955 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:19.769021 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8615e525', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:00:19.769072 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:01:13.316985 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:01:13.317103 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.317121 | orchestrator | 2026-03-29 05:01:13.317134 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-29 05:01:13.317146 | orchestrator | Sunday 29 March 2026 05:00:19 +0000 (0:00:01.162) 0:05:46.799 ********** 2026-03-29 05:01:13.317157 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:01:13.317169 | orchestrator | 2026-03-29 05:01:13.317180 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-29 05:01:13.317191 | orchestrator | Sunday 29 March 2026 05:00:21 +0000 (0:00:01.284) 0:05:48.084 ********** 2026-03-29 05:01:13.317242 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:01:13.317253 | orchestrator | 2026-03-29 05:01:13.317264 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 05:01:13.317300 | orchestrator | Sunday 29 March 2026 05:00:22 +0000 (0:00:01.083) 0:05:49.168 ********** 2026-03-29 05:01:13.317312 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:01:13.317323 | orchestrator | 2026-03-29 05:01:13.317334 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 05:01:13.317345 | orchestrator | Sunday 29 March 2026 05:00:23 +0000 (0:00:01.359) 0:05:50.527 ********** 2026-03-29 05:01:13.317355 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.317366 | orchestrator | 2026-03-29 05:01:13.317378 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 05:01:13.317388 | orchestrator | Sunday 29 March 2026 05:00:24 +0000 (0:00:01.092) 0:05:51.620 ********** 2026-03-29 05:01:13.317399 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.317410 | orchestrator | 2026-03-29 05:01:13.317421 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 
05:01:13.317432 | orchestrator | Sunday 29 March 2026 05:00:25 +0000 (0:00:01.208) 0:05:52.829 ********** 2026-03-29 05:01:13.317443 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.317454 | orchestrator | 2026-03-29 05:01:13.317466 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-29 05:01:13.317476 | orchestrator | Sunday 29 March 2026 05:00:26 +0000 (0:00:01.122) 0:05:53.951 ********** 2026-03-29 05:01:13.317487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:01:13.317499 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 05:01:13.317509 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 05:01:13.317522 | orchestrator | 2026-03-29 05:01:13.317535 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 05:01:13.317548 | orchestrator | Sunday 29 March 2026 05:00:28 +0000 (0:00:01.878) 0:05:55.830 ********** 2026-03-29 05:01:13.317561 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 05:01:13.317576 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 05:01:13.317588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 05:01:13.317601 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.317616 | orchestrator | 2026-03-29 05:01:13.317634 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-29 05:01:13.317654 | orchestrator | Sunday 29 March 2026 05:00:29 +0000 (0:00:01.210) 0:05:57.041 ********** 2026-03-29 05:01:13.317672 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.317692 | orchestrator | 2026-03-29 05:01:13.317706 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-29 05:01:13.317717 | orchestrator | Sunday 29 March 2026 05:00:31 +0000 
(0:00:01.138) 0:05:58.179 ********** 2026-03-29 05:01:13.317728 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:01:13.317739 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 05:01:13.317750 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:01:13.317775 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-29 05:01:13.317787 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 05:01:13.317797 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 05:01:13.317808 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 05:01:13.317819 | orchestrator | 2026-03-29 05:01:13.317829 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-29 05:01:13.317840 | orchestrator | Sunday 29 March 2026 05:00:33 +0000 (0:00:02.008) 0:06:00.188 ********** 2026-03-29 05:01:13.317851 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:01:13.317862 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 05:01:13.317872 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:01:13.317893 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-29 05:01:13.317921 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 05:01:13.317933 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 05:01:13.317944 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 
05:01:13.317955 | orchestrator | 2026-03-29 05:01:13.317966 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-29 05:01:13.317976 | orchestrator | Sunday 29 March 2026 05:00:35 +0000 (0:00:02.693) 0:06:02.882 ********** 2026-03-29 05:01:13.317987 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-29 05:01:13.317998 | orchestrator | 2026-03-29 05:01:13.318009 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-29 05:01:13.318084 | orchestrator | Sunday 29 March 2026 05:00:38 +0000 (0:00:02.281) 0:06:05.163 ********** 2026-03-29 05:01:13.318098 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.318109 | orchestrator | 2026-03-29 05:01:13.318120 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-29 05:01:13.318131 | orchestrator | Sunday 29 March 2026 05:00:39 +0000 (0:00:01.197) 0:06:06.361 ********** 2026-03-29 05:01:13.318142 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.318152 | orchestrator | 2026-03-29 05:01:13.318164 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-29 05:01:13.318174 | orchestrator | Sunday 29 March 2026 05:00:40 +0000 (0:00:01.114) 0:06:07.476 ********** 2026-03-29 05:01:13.318185 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-29 05:01:13.318240 | orchestrator | 2026-03-29 05:01:13.318261 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-29 05:01:13.318282 | orchestrator | Sunday 29 March 2026 05:00:42 +0000 (0:00:02.199) 0:06:09.676 ********** 2026-03-29 05:01:13.318300 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:01:13.318315 | orchestrator | 2026-03-29 05:01:13.318326 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-03-29 05:01:13.318337 | orchestrator | Sunday 29 March 2026 05:00:43 +0000 (0:00:01.123) 0:06:10.800 **********
2026-03-29 05:01:13.318347 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:01:13.318358 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 05:01:13.318369 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 05:01:13.318380 | orchestrator |
2026-03-29 05:01:13.318391 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-29 05:01:13.318402 | orchestrator | Sunday 29 March 2026 05:00:46 +0000 (0:00:02.532) 0:06:13.332 **********
2026-03-29 05:01:13.318412 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-29 05:01:13.318423 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-29 05:01:13.318435 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-29 05:01:13.318446 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-29 05:01:13.318457 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-29 05:01:13.318469 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-29 05:01:13.318480 | orchestrator |
2026-03-29 05:01:13.318490 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-29 05:01:13.318501 | orchestrator | Sunday 29 March 2026 05:01:00 +0000 (0:00:13.812) 0:06:27.145 **********
2026-03-29 05:01:13.318512 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:01:13.318532 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:01:13.318543 | orchestrator |
2026-03-29 05:01:13.318554 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-29 05:01:13.318565 | orchestrator | Sunday 29 March 2026 05:01:03 +0000 (0:00:03.867) 0:06:31.013 **********
2026-03-29 05:01:13.318576 | orchestrator | changed: [testbed-node-0]
2026-03-29 05:01:13.318587 | orchestrator |
2026-03-29 05:01:13.318598 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-29 05:01:13.318609 | orchestrator | Sunday 29 March 2026 05:01:06 +0000 (0:00:02.586) 0:06:33.599 **********
2026-03-29 05:01:13.318619 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-29 05:01:13.318631 | orchestrator |
2026-03-29 05:01:13.318648 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-29 05:01:13.318659 | orchestrator | Sunday 29 March 2026 05:01:07 +0000 (0:00:01.435) 0:06:35.034 **********
2026-03-29 05:01:13.318670 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-29 05:01:13.318681 | orchestrator |
2026-03-29 05:01:13.318692 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-29 05:01:13.318703 | orchestrator | Sunday 29 March 2026 05:01:09 +0000 (0:00:01.559) 0:06:36.594 **********
2026-03-29 05:01:13.318713 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:01:13.318724 | orchestrator |
2026-03-29 05:01:13.318735 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-29 05:01:13.318746 | orchestrator | Sunday 29 March 2026 05:01:11 +0000 (0:00:01.531) 0:06:38.126 **********
2026-03-29 05:01:13.318757 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:01:13.318768 | orchestrator |
2026-03-29 05:01:13.318778 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-29 05:01:13.318789 | orchestrator | Sunday 29 March 2026 05:01:12 +0000 (0:00:01.125) 0:06:39.251 **********
2026-03-29 05:01:13.318800 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:01:13.318811 | orchestrator |
2026-03-29 05:01:13.318832 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-29 05:02:04.127166 | orchestrator | Sunday 29 March 2026 05:01:13 +0000 (0:00:01.094) 0:06:40.345 **********
2026-03-29 05:02:04.127318 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127334 | orchestrator |
2026-03-29 05:02:04.127344 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-29 05:02:04.127353 | orchestrator | Sunday 29 March 2026 05:01:14 +0000 (0:00:01.108) 0:06:41.454 **********
2026-03-29 05:02:04.127362 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.127372 | orchestrator |
2026-03-29 05:02:04.127380 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-29 05:02:04.127388 | orchestrator | Sunday 29 March 2026 05:01:15 +0000 (0:00:01.525) 0:06:42.980 **********
2026-03-29 05:02:04.127397 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127406 | orchestrator |
2026-03-29 05:02:04.127415 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-29 05:02:04.127423 | orchestrator | Sunday 29 March 2026 05:01:17 +0000 (0:00:01.115) 0:06:44.096 **********
2026-03-29 05:02:04.127431 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127439 | orchestrator |
2026-03-29 05:02:04.127449 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-29 05:02:04.127458 | orchestrator | Sunday 29 March 2026 05:01:18 +0000 (0:00:01.152) 0:06:45.248 **********
2026-03-29 05:02:04.127467 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.127476 | orchestrator |
2026-03-29 05:02:04.127485 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-29 05:02:04.127494 | orchestrator | Sunday 29 March 2026 05:01:19 +0000 (0:00:01.615) 0:06:46.863 **********
2026-03-29 05:02:04.127503 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.127512 | orchestrator |
2026-03-29 05:02:04.127520 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-29 05:02:04.127550 | orchestrator | Sunday 29 March 2026 05:01:21 +0000 (0:00:01.503) 0:06:48.367 **********
2026-03-29 05:02:04.127556 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127561 | orchestrator |
2026-03-29 05:02:04.127566 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-29 05:02:04.127571 | orchestrator | Sunday 29 March 2026 05:01:22 +0000 (0:00:01.119) 0:06:49.487 **********
2026-03-29 05:02:04.127577 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.127582 | orchestrator |
2026-03-29 05:02:04.127587 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-29 05:02:04.127593 | orchestrator | Sunday 29 March 2026 05:01:23 +0000 (0:00:01.147) 0:06:50.635 **********
2026-03-29 05:02:04.127598 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127603 | orchestrator |
2026-03-29 05:02:04.127608 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-29 05:02:04.127613 | orchestrator | Sunday 29 March 2026 05:01:24 +0000 (0:00:01.139) 0:06:51.774 **********
2026-03-29 05:02:04.127618 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127623 | orchestrator |
2026-03-29 05:02:04.127628 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-29 05:02:04.127633 | orchestrator | Sunday 29 March 2026 05:01:25 +0000 (0:00:01.139) 0:06:52.914 **********
2026-03-29 05:02:04.127638 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127643 | orchestrator |
2026-03-29 05:02:04.127648 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-29 05:02:04.127653 | orchestrator | Sunday 29 March 2026 05:01:27 +0000 (0:00:01.131) 0:06:54.045 **********
2026-03-29 05:02:04.127658 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127663 | orchestrator |
2026-03-29 05:02:04.127668 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-29 05:02:04.127673 | orchestrator | Sunday 29 March 2026 05:01:28 +0000 (0:00:01.126) 0:06:55.172 **********
2026-03-29 05:02:04.127678 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127683 | orchestrator |
2026-03-29 05:02:04.127689 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-29 05:02:04.127694 | orchestrator | Sunday 29 March 2026 05:01:29 +0000 (0:00:01.096) 0:06:56.268 **********
2026-03-29 05:02:04.127699 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.127704 | orchestrator |
2026-03-29 05:02:04.127709 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-29 05:02:04.127714 | orchestrator | Sunday 29 March 2026 05:01:30 +0000 (0:00:01.108) 0:06:57.377 **********
2026-03-29 05:02:04.127719 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.127724 | orchestrator |
2026-03-29 05:02:04.127729 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-29 05:02:04.127735 | orchestrator | Sunday 29 March 2026 05:01:31 +0000 (0:00:01.155) 0:06:58.533 **********
2026-03-29 05:02:04.127741 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.127747 | orchestrator |
2026-03-29 05:02:04.127764 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-29 05:02:04.127770 | orchestrator | Sunday 29 March 2026 05:01:32 +0000 (0:00:01.113) 0:06:59.647 **********
2026-03-29 05:02:04.127776 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127782 | orchestrator |
2026-03-29 05:02:04.127788 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-29 05:02:04.127794 | orchestrator | Sunday 29 March 2026 05:01:33 +0000 (0:00:01.122) 0:07:00.769 **********
2026-03-29 05:02:04.127800 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127806 | orchestrator |
2026-03-29 05:02:04.127812 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-29 05:02:04.127819 | orchestrator | Sunday 29 March 2026 05:01:34 +0000 (0:00:01.107) 0:07:01.877 **********
2026-03-29 05:02:04.127828 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127835 | orchestrator |
2026-03-29 05:02:04.127854 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-29 05:02:04.127865 | orchestrator | Sunday 29 March 2026 05:01:35 +0000 (0:00:01.127) 0:07:03.005 **********
2026-03-29 05:02:04.127873 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127881 | orchestrator |
2026-03-29 05:02:04.127889 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-29 05:02:04.127898 | orchestrator | Sunday 29 March 2026 05:01:37 +0000 (0:00:01.164) 0:07:04.169 **********
2026-03-29 05:02:04.127924 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127933 | orchestrator |
2026-03-29 05:02:04.127940 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-29 05:02:04.127948 | orchestrator | Sunday 29 March 2026 05:01:38 +0000 (0:00:01.173) 0:07:05.342 **********
2026-03-29 05:02:04.127956 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127964 | orchestrator |
2026-03-29 05:02:04.127971 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-29 05:02:04.127980 | orchestrator | Sunday 29 March 2026 05:01:39 +0000 (0:00:01.096) 0:07:06.439 **********
2026-03-29 05:02:04.127987 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.127995 | orchestrator |
2026-03-29 05:02:04.128003 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-29 05:02:04.128013 | orchestrator | Sunday 29 March 2026 05:01:40 +0000 (0:00:01.115) 0:07:07.555 **********
2026-03-29 05:02:04.128022 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128029 | orchestrator |
2026-03-29 05:02:04.128038 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-29 05:02:04.128047 | orchestrator | Sunday 29 March 2026 05:01:41 +0000 (0:00:01.110) 0:07:08.665 **********
2026-03-29 05:02:04.128055 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128063 | orchestrator |
2026-03-29 05:02:04.128072 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-29 05:02:04.128081 | orchestrator | Sunday 29 March 2026 05:01:42 +0000 (0:00:01.120) 0:07:09.785 **********
2026-03-29 05:02:04.128089 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128097 | orchestrator |
2026-03-29 05:02:04.128105 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-29 05:02:04.128113 | orchestrator | Sunday 29 March 2026 05:01:43 +0000 (0:00:01.113) 0:07:10.899 **********
2026-03-29 05:02:04.128121 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128130 | orchestrator |
2026-03-29 05:02:04.128138 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-29 05:02:04.128146 | orchestrator | Sunday 29 March 2026 05:01:44 +0000 (0:00:01.119) 0:07:12.019 **********
2026-03-29 05:02:04.128155 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128163 | orchestrator |
2026-03-29 05:02:04.128171 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-29 05:02:04.128179 | orchestrator | Sunday 29 March 2026 05:01:46 +0000 (0:00:01.146) 0:07:13.166 **********
2026-03-29 05:02:04.128187 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.128196 | orchestrator |
2026-03-29 05:02:04.128205 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-29 05:02:04.128213 | orchestrator | Sunday 29 March 2026 05:01:48 +0000 (0:00:01.967) 0:07:15.133 **********
2026-03-29 05:02:04.128220 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.128228 | orchestrator |
2026-03-29 05:02:04.128259 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-29 05:02:04.128268 | orchestrator | Sunday 29 March 2026 05:01:50 +0000 (0:00:02.428) 0:07:17.563 **********
2026-03-29 05:02:04.128277 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-29 05:02:04.128287 | orchestrator |
2026-03-29 05:02:04.128295 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-29 05:02:04.128303 | orchestrator | Sunday 29 March 2026 05:01:51 +0000 (0:00:01.440) 0:07:19.003 **********
2026-03-29 05:02:04.128311 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128330 | orchestrator |
2026-03-29 05:02:04.128339 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-29 05:02:04.128347 | orchestrator | Sunday 29 March 2026 05:01:53 +0000 (0:00:01.102) 0:07:20.106 **********
2026-03-29 05:02:04.128356 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128364 | orchestrator |
2026-03-29 05:02:04.128373 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-29 05:02:04.128382 | orchestrator | Sunday 29 March 2026 05:01:54 +0000 (0:00:01.095) 0:07:21.202 **********
2026-03-29 05:02:04.128391 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 05:02:04.128397 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 05:02:04.128402 | orchestrator |
2026-03-29 05:02:04.128408 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-29 05:02:04.128413 | orchestrator | Sunday 29 March 2026 05:01:55 +0000 (0:00:01.833) 0:07:23.035 **********
2026-03-29 05:02:04.128418 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.128423 | orchestrator |
2026-03-29 05:02:04.128428 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-29 05:02:04.128439 | orchestrator | Sunday 29 March 2026 05:01:57 +0000 (0:00:01.614) 0:07:24.650 **********
2026-03-29 05:02:04.128444 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128449 | orchestrator |
2026-03-29 05:02:04.128454 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-29 05:02:04.128459 | orchestrator | Sunday 29 March 2026 05:01:58 +0000 (0:00:01.136) 0:07:25.786 **********
2026-03-29 05:02:04.128465 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128469 | orchestrator |
2026-03-29 05:02:04.128475 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-29 05:02:04.128480 | orchestrator | Sunday 29 March 2026 05:01:59 +0000 (0:00:01.107) 0:07:26.894 **********
2026-03-29 05:02:04.128485 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:04.128490 | orchestrator |
2026-03-29 05:02:04.128495 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-29 05:02:04.128500 | orchestrator | Sunday 29 March 2026 05:02:00 +0000 (0:00:01.129) 0:07:28.023 **********
2026-03-29 05:02:04.128505 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-29 05:02:04.128510 | orchestrator |
2026-03-29 05:02:04.128515 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-29 05:02:04.128520 | orchestrator | Sunday 29 March 2026 05:02:02 +0000 (0:00:01.436) 0:07:29.459 **********
2026-03-29 05:02:04.128526 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:04.128531 | orchestrator |
2026-03-29 05:02:04.128544 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-29 05:02:50.599567 | orchestrator | Sunday 29 March 2026 05:02:04 +0000 (0:00:01.700) 0:07:31.159 **********
2026-03-29 05:02:50.599650 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 05:02:50.599657 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 05:02:50.599662 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 05:02:50.599666 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599672 | orchestrator |
2026-03-29 05:02:50.599677 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-29 05:02:50.599681 | orchestrator | Sunday 29 March 2026 05:02:05 +0000 (0:00:01.241) 0:07:32.401 **********
2026-03-29 05:02:50.599685 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599688 | orchestrator |
2026-03-29 05:02:50.599692 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-29 05:02:50.599697 | orchestrator | Sunday 29 March 2026 05:02:06 +0000 (0:00:01.135) 0:07:33.537 **********
2026-03-29 05:02:50.599701 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599721 | orchestrator |
2026-03-29 05:02:50.599725 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-29 05:02:50.599729 | orchestrator | Sunday 29 March 2026 05:02:07 +0000 (0:00:01.120) 0:07:34.657 **********
2026-03-29 05:02:50.599733 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599737 | orchestrator |
2026-03-29 05:02:50.599741 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-29 05:02:50.599745 | orchestrator | Sunday 29 March 2026 05:02:08 +0000 (0:00:01.196) 0:07:35.853 **********
2026-03-29 05:02:50.599748 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599752 | orchestrator |
2026-03-29 05:02:50.599756 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-29 05:02:50.599760 | orchestrator | Sunday 29 March 2026 05:02:09 +0000 (0:00:01.145) 0:07:36.999 **********
2026-03-29 05:02:50.599764 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599767 | orchestrator |
2026-03-29 05:02:50.599771 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-29 05:02:50.599775 | orchestrator | Sunday 29 March 2026 05:02:11 +0000 (0:00:01.153) 0:07:38.153 **********
2026-03-29 05:02:50.599779 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:50.599784 | orchestrator |
2026-03-29 05:02:50.599788 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-29 05:02:50.599791 | orchestrator | Sunday 29 March 2026 05:02:13 +0000 (0:00:02.590) 0:07:40.743 **********
2026-03-29 05:02:50.599795 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:50.599799 | orchestrator |
2026-03-29 05:02:50.599803 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-29 05:02:50.599806 | orchestrator | Sunday 29 March 2026 05:02:14 +0000 (0:00:01.110) 0:07:41.854 **********
2026-03-29 05:02:50.599810 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-29 05:02:50.599814 | orchestrator |
2026-03-29 05:02:50.599818 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-29 05:02:50.599822 | orchestrator | Sunday 29 March 2026 05:02:16 +0000 (0:00:01.453) 0:07:43.307 **********
2026-03-29 05:02:50.599825 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599829 | orchestrator |
2026-03-29 05:02:50.599833 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-29 05:02:50.599837 | orchestrator | Sunday 29 March 2026 05:02:17 +0000 (0:00:01.107) 0:07:44.414 **********
2026-03-29 05:02:50.599841 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599844 | orchestrator |
2026-03-29 05:02:50.599848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-29 05:02:50.599852 | orchestrator | Sunday 29 March 2026 05:02:18 +0000 (0:00:01.125) 0:07:45.540 **********
2026-03-29 05:02:50.599856 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599859 | orchestrator |
2026-03-29 05:02:50.599863 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-29 05:02:50.599867 | orchestrator | Sunday 29 March 2026 05:02:19 +0000 (0:00:01.106) 0:07:46.646 **********
2026-03-29 05:02:50.599870 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599874 | orchestrator |
2026-03-29 05:02:50.599878 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-29 05:02:50.599882 | orchestrator | Sunday 29 March 2026 05:02:20 +0000 (0:00:01.152) 0:07:47.799 **********
2026-03-29 05:02:50.599885 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599889 | orchestrator |
2026-03-29 05:02:50.599904 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-29 05:02:50.599908 | orchestrator | Sunday 29 March 2026 05:02:21 +0000 (0:00:01.128) 0:07:48.927 **********
2026-03-29 05:02:50.599912 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599915 | orchestrator |
2026-03-29 05:02:50.599919 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-29 05:02:50.599923 | orchestrator | Sunday 29 March 2026 05:02:22 +0000 (0:00:01.111) 0:07:50.039 **********
2026-03-29 05:02:50.599931 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599935 | orchestrator |
2026-03-29 05:02:50.599938 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-29 05:02:50.599942 | orchestrator | Sunday 29 March 2026 05:02:24 +0000 (0:00:01.131) 0:07:51.171 **********
2026-03-29 05:02:50.599946 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.599949 | orchestrator |
2026-03-29 05:02:50.599953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-29 05:02:50.599957 | orchestrator | Sunday 29 March 2026 05:02:25 +0000 (0:00:01.166) 0:07:52.337 **********
2026-03-29 05:02:50.599961 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:02:50.599964 | orchestrator |
2026-03-29 05:02:50.599968 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-29 05:02:50.599972 | orchestrator | Sunday 29 March 2026 05:02:26 +0000 (0:00:01.129) 0:07:53.467 **********
2026-03-29 05:02:50.599976 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-29 05:02:50.599980 | orchestrator |
2026-03-29 05:02:50.599994 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-29 05:02:50.599998 | orchestrator | Sunday 29 March 2026 05:02:27 +0000 (0:00:01.482) 0:07:54.950 **********
2026-03-29 05:02:50.600002 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-29 05:02:50.600006 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-29 05:02:50.600010 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-29 05:02:50.600013 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-29 05:02:50.600017 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-29 05:02:50.600021 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-29 05:02:50.600025 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-29 05:02:50.600028 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-29 05:02:50.600033 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-29 05:02:50.600036 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-29 05:02:50.600040 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-29 05:02:50.600044 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-29 05:02:50.600048 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-29 05:02:50.600052 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-29 05:02:50.600055 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-29 05:02:50.600059 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-29 05:02:50.600063 | orchestrator |
2026-03-29 05:02:50.600067 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-29 05:02:50.600070 | orchestrator | Sunday 29 March 2026 05:02:34 +0000 (0:00:06.848) 0:08:01.798 **********
2026-03-29 05:02:50.600074 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600078 | orchestrator |
2026-03-29 05:02:50.600082 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-29 05:02:50.600085 | orchestrator | Sunday 29 March 2026 05:02:35 +0000 (0:00:01.135) 0:08:02.933 **********
2026-03-29 05:02:50.600089 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600093 | orchestrator |
2026-03-29 05:02:50.600097 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-29 05:02:50.600101 | orchestrator | Sunday 29 March 2026 05:02:37 +0000 (0:00:01.150) 0:08:04.084 **********
2026-03-29 05:02:50.600104 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600108 | orchestrator |
2026-03-29 05:02:50.600112 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-29 05:02:50.600116 | orchestrator | Sunday 29 March 2026 05:02:38 +0000 (0:00:01.126) 0:08:05.211 **********
2026-03-29 05:02:50.600121 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600125 | orchestrator |
2026-03-29 05:02:50.600133 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-29 05:02:50.600138 | orchestrator | Sunday 29 March 2026 05:02:39 +0000 (0:00:01.114) 0:08:06.325 **********
2026-03-29 05:02:50.600142 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600146 | orchestrator |
2026-03-29 05:02:50.600150 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-29 05:02:50.600155 | orchestrator | Sunday 29 March 2026 05:02:40 +0000 (0:00:01.120) 0:08:07.445 **********
2026-03-29 05:02:50.600159 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600164 | orchestrator |
2026-03-29 05:02:50.600168 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-29 05:02:50.600173 | orchestrator | Sunday 29 March 2026 05:02:41 +0000 (0:00:01.109) 0:08:08.555 **********
2026-03-29 05:02:50.600177 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600181 | orchestrator |
2026-03-29 05:02:50.600186 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-29 05:02:50.600190 | orchestrator | Sunday 29 March 2026 05:02:42 +0000 (0:00:01.144) 0:08:09.700 **********
2026-03-29 05:02:50.600195 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600199 | orchestrator |
2026-03-29 05:02:50.600204 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-29 05:02:50.600208 | orchestrator | Sunday 29 March 2026 05:02:43 +0000 (0:00:01.097) 0:08:10.797 **********
2026-03-29 05:02:50.600213 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600217 | orchestrator |
2026-03-29 05:02:50.600225 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-29 05:02:50.600229 | orchestrator | Sunday 29 March 2026 05:02:44 +0000 (0:00:01.109) 0:08:11.907 **********
2026-03-29 05:02:50.600233 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600238 | orchestrator |
2026-03-29 05:02:50.600278 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-29 05:02:50.600283 | orchestrator | Sunday 29 March 2026 05:02:46 +0000 (0:00:01.155) 0:08:13.063 **********
2026-03-29 05:02:50.600288 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600292 | orchestrator |
2026-03-29 05:02:50.600297 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-29 05:02:50.600301 | orchestrator | Sunday 29 March 2026 05:02:47 +0000 (0:00:01.110) 0:08:14.173 **********
2026-03-29 05:02:50.600306 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600310 | orchestrator |
2026-03-29 05:02:50.600314 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-29 05:02:50.600319 | orchestrator | Sunday 29 March 2026 05:02:48 +0000 (0:00:01.097) 0:08:15.270 **********
2026-03-29 05:02:50.600323 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600328 | orchestrator |
2026-03-29 05:02:50.600332 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-29 05:02:50.600337 | orchestrator | Sunday 29 March 2026 05:02:49 +0000 (0:00:01.237) 0:08:16.508 **********
2026-03-29 05:02:50.600342 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:02:50.600346 | orchestrator |
2026-03-29 05:02:50.600354 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-29 05:03:44.301750 | orchestrator | Sunday 29 March 2026 05:02:50 +0000 (0:00:01.121) 0:08:17.629 **********
2026-03-29 05:03:44.301904 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.301936 | orchestrator |
2026-03-29 05:03:44.301957 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-29 05:03:44.301976 | orchestrator | Sunday 29 March 2026 05:02:51 +0000 (0:00:01.194) 0:08:18.824 **********
2026-03-29 05:03:44.301995 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302097 | orchestrator |
2026-03-29 05:03:44.302123 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-29 05:03:44.302142 | orchestrator | Sunday 29 March 2026 05:02:52 +0000 (0:00:01.111) 0:08:19.936 **********
2026-03-29 05:03:44.302213 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302234 | orchestrator |
2026-03-29 05:03:44.302323 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-29 05:03:44.302352 | orchestrator | Sunday 29 March 2026 05:02:54 +0000 (0:00:01.108) 0:08:21.045 **********
2026-03-29 05:03:44.302371 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302390 | orchestrator |
2026-03-29 05:03:44.302411 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-29 05:03:44.302430 | orchestrator | Sunday 29 March 2026 05:02:55 +0000 (0:00:01.118) 0:08:22.164 **********
2026-03-29 05:03:44.302450 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302469 | orchestrator |
2026-03-29 05:03:44.302489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-29 05:03:44.302509 | orchestrator | Sunday 29 March 2026 05:02:56 +0000 (0:00:01.145) 0:08:23.309 **********
2026-03-29 05:03:44.302529 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302550 | orchestrator |
2026-03-29 05:03:44.302568 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-29 05:03:44.302588 | orchestrator | Sunday 29 March 2026 05:02:57 +0000 (0:00:01.113) 0:08:24.423 **********
2026-03-29 05:03:44.302604 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302617 | orchestrator |
2026-03-29 05:03:44.302630 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-29 05:03:44.302642 | orchestrator | Sunday 29 March 2026 05:02:58 +0000 (0:00:01.161) 0:08:25.585 **********
2026-03-29 05:03:44.302652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-29 05:03:44.302664 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-29 05:03:44.302674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-29 05:03:44.302685 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302696 | orchestrator |
2026-03-29 05:03:44.302706 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-29 05:03:44.302717 | orchestrator | Sunday 29 March 2026 05:02:59 +0000 (0:00:01.379) 0:08:26.964 **********
2026-03-29 05:03:44.302728 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-29 05:03:44.302739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-29 05:03:44.302750 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-29 05:03:44.302760 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302771 | orchestrator |
2026-03-29 05:03:44.302782 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-29 05:03:44.302839 | orchestrator | Sunday 29 March 2026 05:03:01 +0000 (0:00:01.409) 0:08:28.374 **********
2026-03-29 05:03:44.302875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-29 05:03:44.302894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-29 05:03:44.302914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-29 05:03:44.302933 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.302951 | orchestrator |
2026-03-29 05:03:44.302970 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-29 05:03:44.302989 | orchestrator | Sunday 29 March 2026 05:03:02 +0000 (0:00:01.397) 0:08:29.771 **********
2026-03-29 05:03:44.303007 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.303024 | orchestrator |
2026-03-29 05:03:44.303040 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-29 05:03:44.303058 | orchestrator | Sunday 29 March 2026 05:03:03 +0000 (0:00:01.131) 0:08:30.903 **********
2026-03-29 05:03:44.303075 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-29 05:03:44.303094 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:03:44.303114 | orchestrator |
2026-03-29 05:03:44.303151 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-29 05:03:44.303171 | orchestrator | Sunday 29 March 2026 05:03:05 +0000 (0:00:01.290) 0:08:32.194 **********
2026-03-29 05:03:44.303197 | orchestrator | changed: [testbed-node-0]
2026-03-29 05:03:44.303208 | orchestrator |
2026-03-29 05:03:44.303219 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-29 05:03:44.303230 | orchestrator | Sunday 29 March 2026 05:03:06 +0000 (0:00:01.747) 0:08:33.942 **********
2026-03-29 05:03:44.303241 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:03:44.303277 | orchestrator |
2026-03-29 05:03:44.303289 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-29 05:03:44.303300 | orchestrator | Sunday 29 March 2026 05:03:08 +0000 (0:00:01.126) 0:08:35.069 **********
2026-03-29 05:03:44.303311 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-03-29 05:03:44.303323 | orchestrator |
2026-03-29 05:03:44.303334 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-29 05:03:44.303344 | orchestrator | Sunday 29 March 2026 05:03:09 +0000 (0:00:01.487) 0:08:36.557 **********
2026-03-29 05:03:44.303355 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-03-29 05:03:44.303367 | orchestrator |
2026-03-29 05:03:44.303378 | orchestrator | TASK [ceph-mon : Generate
monitor initial keyring] ***************************** 2026-03-29 05:03:44.303389 | orchestrator | Sunday 29 March 2026 05:03:12 +0000 (0:00:03.425) 0:08:39.982 ********** 2026-03-29 05:03:44.303400 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:03:44.303411 | orchestrator | 2026-03-29 05:03:44.303445 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-29 05:03:44.303457 | orchestrator | Sunday 29 March 2026 05:03:14 +0000 (0:00:01.165) 0:08:41.148 ********** 2026-03-29 05:03:44.303468 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.303479 | orchestrator | 2026-03-29 05:03:44.303490 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-29 05:03:44.303500 | orchestrator | Sunday 29 March 2026 05:03:15 +0000 (0:00:01.133) 0:08:42.281 ********** 2026-03-29 05:03:44.303511 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.303522 | orchestrator | 2026-03-29 05:03:44.303533 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-29 05:03:44.303544 | orchestrator | Sunday 29 March 2026 05:03:16 +0000 (0:00:01.145) 0:08:43.427 ********** 2026-03-29 05:03:44.303554 | orchestrator | changed: [testbed-node-0] 2026-03-29 05:03:44.303565 | orchestrator | 2026-03-29 05:03:44.303576 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-29 05:03:44.303587 | orchestrator | Sunday 29 March 2026 05:03:18 +0000 (0:00:02.027) 0:08:45.455 ********** 2026-03-29 05:03:44.303597 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.303608 | orchestrator | 2026-03-29 05:03:44.303619 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-29 05:03:44.303630 | orchestrator | Sunday 29 March 2026 05:03:19 +0000 (0:00:01.576) 0:08:47.031 ********** 2026-03-29 05:03:44.303641 | orchestrator | ok: 
[testbed-node-0] 2026-03-29 05:03:44.303652 | orchestrator | 2026-03-29 05:03:44.303662 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-29 05:03:44.303673 | orchestrator | Sunday 29 March 2026 05:03:21 +0000 (0:00:01.575) 0:08:48.607 ********** 2026-03-29 05:03:44.303684 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.303695 | orchestrator | 2026-03-29 05:03:44.303705 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-29 05:03:44.303716 | orchestrator | Sunday 29 March 2026 05:03:23 +0000 (0:00:01.493) 0:08:50.100 ********** 2026-03-29 05:03:44.303727 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.303738 | orchestrator | 2026-03-29 05:03:44.303749 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-29 05:03:44.303759 | orchestrator | Sunday 29 March 2026 05:03:24 +0000 (0:00:01.685) 0:08:51.786 ********** 2026-03-29 05:03:44.303770 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.303781 | orchestrator | 2026-03-29 05:03:44.303792 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-29 05:03:44.303810 | orchestrator | Sunday 29 March 2026 05:03:26 +0000 (0:00:01.648) 0:08:53.434 ********** 2026-03-29 05:03:44.303821 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-29 05:03:44.303832 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 05:03:44.303844 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 05:03:44.303854 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-29 05:03:44.303865 | orchestrator | 2026-03-29 05:03:44.303876 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-29 05:03:44.303887 | orchestrator | Sunday 29 March 2026 05:03:30 +0000 
(0:00:03.807) 0:08:57.242 ********** 2026-03-29 05:03:44.303898 | orchestrator | changed: [testbed-node-0] 2026-03-29 05:03:44.303908 | orchestrator | 2026-03-29 05:03:44.303919 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-29 05:03:44.303930 | orchestrator | Sunday 29 March 2026 05:03:32 +0000 (0:00:02.038) 0:08:59.280 ********** 2026-03-29 05:03:44.303941 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.303952 | orchestrator | 2026-03-29 05:03:44.303962 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-29 05:03:44.303973 | orchestrator | Sunday 29 March 2026 05:03:33 +0000 (0:00:01.141) 0:09:00.421 ********** 2026-03-29 05:03:44.303984 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.304011 | orchestrator | 2026-03-29 05:03:44.304034 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-29 05:03:44.304045 | orchestrator | Sunday 29 March 2026 05:03:34 +0000 (0:00:01.135) 0:09:01.557 ********** 2026-03-29 05:03:44.304056 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.304067 | orchestrator | 2026-03-29 05:03:44.304078 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-29 05:03:44.304089 | orchestrator | Sunday 29 March 2026 05:03:36 +0000 (0:00:02.040) 0:09:03.598 ********** 2026-03-29 05:03:44.304099 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:03:44.304110 | orchestrator | 2026-03-29 05:03:44.304127 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-29 05:03:44.304138 | orchestrator | Sunday 29 March 2026 05:03:38 +0000 (0:00:01.468) 0:09:05.067 ********** 2026-03-29 05:03:44.304149 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:03:44.304160 | orchestrator | 2026-03-29 05:03:44.304170 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-03-29 05:03:44.304181 | orchestrator | Sunday 29 March 2026 05:03:39 +0000 (0:00:01.097) 0:09:06.165 ********** 2026-03-29 05:03:44.304192 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-29 05:03:44.304203 | orchestrator | 2026-03-29 05:03:44.304214 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-29 05:03:44.304225 | orchestrator | Sunday 29 March 2026 05:03:40 +0000 (0:00:01.467) 0:09:07.633 ********** 2026-03-29 05:03:44.304235 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:03:44.304246 | orchestrator | 2026-03-29 05:03:44.304315 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-29 05:03:44.304327 | orchestrator | Sunday 29 March 2026 05:03:41 +0000 (0:00:01.101) 0:09:08.735 ********** 2026-03-29 05:03:44.304338 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:03:44.304349 | orchestrator | 2026-03-29 05:03:44.304360 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-29 05:03:44.304371 | orchestrator | Sunday 29 March 2026 05:03:42 +0000 (0:00:01.095) 0:09:09.830 ********** 2026-03-29 05:03:44.304382 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-29 05:03:44.304393 | orchestrator | 2026-03-29 05:03:44.304411 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-29 05:18:02.601032 | orchestrator | Sunday 29 March 2026 05:03:44 +0000 (0:00:01.501) 0:09:11.332 ********** 2026-03-29 05:18:02.601142 | orchestrator | changed: [testbed-node-0] 2026-03-29 05:18:02.601157 | orchestrator | 2026-03-29 05:18:02.601190 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-29 05:18:02.601201 | orchestrator | Sunday 29 March 2026 05:03:46 +0000 
(0:00:02.361) 0:09:13.693 ********** 2026-03-29 05:18:02.601211 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:18:02.601221 | orchestrator | 2026-03-29 05:18:02.601231 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-29 05:18:02.601241 | orchestrator | Sunday 29 March 2026 05:03:48 +0000 (0:00:01.962) 0:09:15.655 ********** 2026-03-29 05:18:02.601251 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:18:02.601261 | orchestrator | 2026-03-29 05:18:02.601270 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-29 05:18:02.601280 | orchestrator | Sunday 29 March 2026 05:03:51 +0000 (0:00:02.463) 0:09:18.119 ********** 2026-03-29 05:18:02.601290 | orchestrator | changed: [testbed-node-0] 2026-03-29 05:18:02.601299 | orchestrator | 2026-03-29 05:18:02.601309 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-29 05:18:02.601318 | orchestrator | Sunday 29 March 2026 05:03:54 +0000 (0:00:03.253) 0:09:21.372 ********** 2026-03-29 05:18:02.601328 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-29 05:18:02.601338 | orchestrator | 2026-03-29 05:18:02.601348 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-29 05:18:02.601358 | orchestrator | Sunday 29 March 2026 05:03:55 +0000 (0:00:01.531) 0:09:22.904 ********** 2026-03-29 05:18:02.601367 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:18:02.601377 | orchestrator | 2026-03-29 05:18:02.601386 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-29 05:18:02.601396 | orchestrator | Sunday 29 March 2026 05:03:58 +0000 (0:00:02.277) 0:09:25.181 ********** 2026-03-29 05:18:02.601405 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:18:02.601415 | orchestrator | 2026-03-29 05:18:02.601425 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-29 05:18:02.601434 | orchestrator | Sunday 29 March 2026 05:04:01 +0000 (0:00:03.072) 0:09:28.254 ********** 2026-03-29 05:18:02.601444 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:18:02.601453 | orchestrator | 2026-03-29 05:18:02.601463 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-29 05:18:02.601472 | orchestrator | Sunday 29 March 2026 05:04:02 +0000 (0:00:01.118) 0:09:29.373 ********** 2026-03-29 05:18:02.601485 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-29 05:18:02.601498 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-29 05:18:02.601508 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-29 05:18:02.601532 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-29 05:18:02.601544 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-29 05:18:02.601561 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__cbbaef874043c14b1bedbaf8b378d164da25fe58'}])  2026-03-29 05:18:02.601572 | orchestrator | 2026-03-29 05:18:02.601599 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-29 05:18:02.601611 | orchestrator | Sunday 29 March 2026 05:04:12 +0000 (0:00:10.179) 0:09:39.552 ********** 
2026-03-29 05:18:02.601623 | orchestrator | changed: [testbed-node-0] 2026-03-29 05:18:02.601634 | orchestrator | 2026-03-29 05:18:02.601645 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-29 05:18:02.601656 | orchestrator | Sunday 29 March 2026 05:04:15 +0000 (0:00:02.595) 0:09:42.148 ********** 2026-03-29 05:18:02.601694 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:18:02.601706 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 05:18:02.601717 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 05:18:02.601729 | orchestrator | 2026-03-29 05:18:02.601740 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 05:18:02.601751 | orchestrator | Sunday 29 March 2026 05:04:17 +0000 (0:00:02.181) 0:09:44.329 ********** 2026-03-29 05:18:02.601762 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 05:18:02.601773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 05:18:02.601784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 05:18:02.601796 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:18:02.601808 | orchestrator | 2026-03-29 05:18:02.601818 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-29 05:18:02.601827 | orchestrator | Sunday 29 March 2026 05:04:18 +0000 (0:00:01.414) 0:09:45.744 ********** 2026-03-29 05:18:02.601837 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:18:02.601846 | orchestrator | 2026-03-29 05:18:02.601856 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-29 05:18:02.601865 | orchestrator | Sunday 29 March 2026 05:04:19 +0000 (0:00:01.100) 0:09:46.844 **********
2026-03-29 05:18:02.601885 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] ***
2026-03-29 05:18:02.601991 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left).
2026-03-29 05:18:02.602324 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left).
2026-03-29 05:35:46.604654 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left).
2026-03-29 05:35:46.604999 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left).
2026-03-29 05:35:46.605276 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
2026-03-29 05:35:46.605584 | orchestrator | fatal: [testbed-node-0]: FAILED!
=> {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.10", "quorum_status", "--format", "json"], "delta": "0:05:00.247874", "end": "2026-03-29 05:35:38.581453", "msg": "non-zero return code", "rc": 1, "start": "2026-03-29 05:30:38.333579", "stderr": "2026-03-29T05:35:38.561+0000 710479bb5640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-03-29T05:35:38.561+0000 710479bb5640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []} 2026-03-29 05:35:46.605599 | orchestrator | 2026-03-29 05:35:46.605610 | orchestrator | TASK [Unmask the mon service] ************************************************** 2026-03-29 05:35:46.605622 | orchestrator | Sunday 29 March 2026 05:35:40 +0000 (0:31:20.276) 0:41:07.121 ********** 2026-03-29 05:35:46.605633 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:35:46.605645 | orchestrator | 2026-03-29 05:35:46.605656 | orchestrator | TASK [Unmask the mgr service] ************************************************** 2026-03-29 05:35:46.605667 | orchestrator | Sunday 29 March 2026 05:35:42 +0000 (0:00:02.168) 0:41:09.289 ********** 2026-03-29 05:35:46.605678 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:35:46.605689 | orchestrator | 2026-03-29 05:35:46.605700 | orchestrator | TASK [Stop the playbook execution] ********************************************* 2026-03-29 05:35:46.605711 | orchestrator | Sunday 29 March 2026 05:35:44 +0000 (0:00:01.836) 0:41:11.126 ********** 2026-03-29 05:35:46.605723 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. 
Please, check the previous task results."}
2026-03-29 05:35:46.605735 | orchestrator |
2026-03-29 05:35:46.605746 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 05:35:46.605757 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-29 05:35:46.605768 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-03-29 05:35:46.605779 | orchestrator | testbed-node-0 : ok=121  changed=10  unreachable=0 failed=1  skipped=164  rescued=1  ignored=0
2026-03-29 05:35:46.605792 | orchestrator | testbed-node-1 : ok=25  changed=2  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-03-29 05:35:46.605803 | orchestrator | testbed-node-2 : ok=25  changed=2  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-03-29 05:35:46.605822 | orchestrator | testbed-node-3 : ok=33  changed=2  unreachable=0 failed=0 skipped=74  rescued=0 ignored=0
2026-03-29 05:35:46.605833 | orchestrator | testbed-node-4 : ok=33  changed=2  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-03-29 05:35:46.605849 | orchestrator | testbed-node-5 : ok=33  changed=2  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-03-29 05:35:46.605860 | orchestrator |
2026-03-29 05:35:46.605893 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 05:35:46.605903 | orchestrator | Sunday 29 March 2026 05:35:46 +0000 (0:00:02.490) 0:41:13.617 **********
2026-03-29 05:35:46.605914 | orchestrator | ===============================================================================
2026-03-29 05:35:46.605925 | orchestrator | Container | waiting for the containerized monitor to join the quorum...
1880.28s 2026-03-29 05:35:46.605936 | orchestrator | Gather and delegate facts ---------------------------------------------- 32.24s 2026-03-29 05:35:46.605947 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.81s 2026-03-29 05:35:46.605957 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.52s 2026-03-29 05:35:46.605968 | orchestrator | Set cluster configs ---------------------------------------------------- 10.52s 2026-03-29 05:35:46.605979 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.18s 2026-03-29 05:35:46.605990 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.85s 2026-03-29 05:35:46.606001 | orchestrator | Gather facts ------------------------------------------------------------ 5.31s 2026-03-29 05:35:46.606012 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 4.56s 2026-03-29 05:35:46.606098 | orchestrator | Stop ceph mon ----------------------------------------------------------- 3.87s 2026-03-29 05:35:46.606120 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.81s 2026-03-29 05:35:47.134471 | orchestrator | 2026-03-29 05:35:47 | INFO  | Task 042b0ede-159d-4297-81da-6620a9ae5fc6 (ceph-rolling_update) was prepared for execution. 2026-03-29 05:35:47.134639 | orchestrator | 2026-03-29 05:35:47 | INFO  | It takes a moment until task 042b0ede-159d-4297-81da-6620a9ae5fc6 (ceph-rolling_update) has been started and output is visible here. 
2026-03-29 05:37:04.327783 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 3.43s
2026-03-29 05:37:04.327896 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.29s
2026-03-29 05:37:04.327911 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 3.25s
2026-03-29 05:37:04.327921 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 3.24s
2026-03-29 05:37:04.327930 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.16s
2026-03-29 05:37:04.327939 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 3.07s
2026-03-29 05:37:04.327948 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 2.98s
2026-03-29 05:37:04.327956 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 2.97s
2026-03-29 05:37:04.327965 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.88s
2026-03-29 05:37:04.327973 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-29 05:37:04.327983 | orchestrator | 2.16.14
2026-03-29 05:37:04.327994 | orchestrator |
2026-03-29 05:37:04.328002 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-03-29 05:37:04.328012 | orchestrator |
2026-03-29 05:37:04.328021 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-03-29 05:37:04.328051 | orchestrator | Sunday 29 March 2026 05:35:53 +0000 (0:00:01.737) 0:00:01.737 **********
2026-03-29 05:37:04.328061 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-03-29 05:37:04.328071 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-03-29 05:37:04.328081 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-03-29 05:37:04.328089 | orchestrator | skipping: [localhost]
2026-03-29 05:37:04.328098 | orchestrator |
2026-03-29 05:37:04.328106 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-03-29 05:37:04.328115 | orchestrator |
2026-03-29 05:37:04.328124 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-03-29 05:37:04.328133 | orchestrator | Sunday 29 March 2026 05:35:55 +0000 (0:00:01.830) 0:00:03.567 **********
2026-03-29 05:37:04.328141 | orchestrator | ok: [testbed-node-0] => {
2026-03-29 05:37:04.328151 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-29 05:37:04.328159 | orchestrator | }
2026-03-29 05:37:04.328167 | orchestrator | ok: [testbed-node-1] => {
2026-03-29 05:37:04.328176 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-29 05:37:04.328185 | orchestrator | }
2026-03-29 05:37:04.328194 | orchestrator | ok: [testbed-node-2] => {
2026-03-29 05:37:04.328202 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-29 05:37:04.328211 | orchestrator | }
2026-03-29 05:37:04.328219 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 05:37:04.328228 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-29 05:37:04.328236 | orchestrator | }
2026-03-29 05:37:04.328245 | orchestrator | ok: [testbed-node-4] => {
2026-03-29 05:37:04.328254 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-29 05:37:04.328263 | orchestrator | }
2026-03-29 05:37:04.328271 | orchestrator | ok: [testbed-node-5] => {
2026-03-29 05:37:04.328280 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-29 05:37:04.328290 | orchestrator | }
2026-03-29 05:37:04.328298 | orchestrator | ok: [testbed-manager] => {
2026-03-29 05:37:04.328306 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-29 05:37:04.328315 | orchestrator | }
2026-03-29 05:37:04.328324 | orchestrator |
2026-03-29 05:37:04.328333 | orchestrator | TASK [Gather facts] ************************************************************
2026-03-29 05:37:04.328357 | orchestrator | Sunday 29 March 2026 05:36:01 +0000 (0:00:05.687) 0:00:09.255 **********
2026-03-29 05:37:04.328366 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:04.328375 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:04.328384 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:04.328392 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:04.328401 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:04.328410 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:04.328419 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.328427 | orchestrator |
2026-03-29 05:37:04.328436 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-03-29 05:37:04.328445 | orchestrator | Sunday 29 March 2026 05:36:07 +0000 (0:00:06.382) 0:00:15.637 **********
2026-03-29 05:37:04.328454 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-29 05:37:04.328463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:37:04.328472 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 05:37:04.328481 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-29 05:37:04.328489 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-29 05:37:04.328497 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-29 05:37:04.328505 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 05:37:04.328523 | orchestrator |
2026-03-29 05:37:04.328531 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-03-29 05:37:04.328539 | orchestrator | Sunday 29 March 2026 05:36:40 +0000 (0:00:32.620) 0:00:48.258 **********
2026-03-29 05:37:04.328547 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.328556 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.328564 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:04.328572 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:04.328580 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:04.328589 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:04.328598 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.328673 | orchestrator |
2026-03-29 05:37:04.328702 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-29 05:37:04.328712 | orchestrator | Sunday 29 March 2026 05:36:42 +0000 (0:00:02.025) 0:00:50.284 **********
2026-03-29 05:37:04.328721 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-29 05:37:04.328732 | orchestrator |
2026-03-29 05:37:04.328741 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-29 05:37:04.328750 | orchestrator | Sunday 29 March 2026 05:36:45 +0000 (0:00:02.558) 0:00:52.842 **********
2026-03-29 05:37:04.328758 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.328767 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.328776 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:04.328785 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:04.328793 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:04.328802 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:04.328810 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.328818 | orchestrator |
2026-03-29 05:37:04.328826 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-29 05:37:04.328834 | orchestrator | Sunday 29 March 2026 05:36:47 +0000 (0:00:02.581) 0:00:55.423 **********
2026-03-29 05:37:04.328842 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.328850 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.328858 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:04.328867 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:04.328874 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:04.328884 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:04.328893 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.328902 | orchestrator |
2026-03-29 05:37:04.328911 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-29 05:37:04.328920 | orchestrator | Sunday 29 March 2026 05:36:49 +0000 (0:00:01.867) 0:00:57.291 **********
2026-03-29 05:37:04.328929 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.328938 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.328947 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:04.328956 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:04.328966 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:04.328976 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:04.328985 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.328995 | orchestrator |
2026-03-29 05:37:04.329004 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-29 05:37:04.329014 | orchestrator | Sunday 29 March 2026 05:36:51 +0000 (0:00:02.390) 0:00:59.682 **********
2026-03-29 05:37:04.329023 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.329032 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.329043 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:04.329052 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:04.329062 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:04.329071 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:04.329081 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.329091 | orchestrator |
2026-03-29 05:37:04.329101 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-29 05:37:04.329110 | orchestrator | Sunday 29 March 2026 05:36:53 +0000 (0:00:01.828) 0:01:01.510 **********
2026-03-29 05:37:04.329130 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.329139 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.329149 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:04.329157 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:04.329166 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:04.329175 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:04.329184 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.329193 | orchestrator |
2026-03-29 05:37:04.329201 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-29 05:37:04.329210 | orchestrator | Sunday 29 March 2026 05:36:55 +0000 (0:00:02.046) 0:01:03.557 **********
2026-03-29 05:37:04.329218 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.329227 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.329235 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:04.329243 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:04.329251 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:04.329259 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:04.329277 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.329286 | orchestrator |
2026-03-29 05:37:04.329294 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-29 05:37:04.329303 | orchestrator | Sunday 29 March 2026 05:36:57 +0000 (0:00:01.838) 0:01:05.396 **********
2026-03-29 05:37:04.329311 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:04.329320 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:04.329328 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:04.329337 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:04.329346 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:04.329354 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:04.329362 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:37:04.329371 | orchestrator |
2026-03-29 05:37:04.329379 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-29 05:37:04.329388 | orchestrator | Sunday 29 March 2026 05:36:59 +0000 (0:00:02.030) 0:01:07.426 **********
2026-03-29 05:37:04.329396 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.329405 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.329413 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:04.329421 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:04.329429 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:04.329437 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:04.329446 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:04.329454 | orchestrator |
2026-03-29 05:37:04.329463 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-29 05:37:04.329471 | orchestrator | Sunday 29 March 2026 05:37:02 +0000 (0:00:02.363) 0:01:09.790 **********
2026-03-29 05:37:04.329480 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:37:04.329489 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 05:37:04.329497 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 05:37:04.329505 | orchestrator |
2026-03-29 05:37:04.329513 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-29 05:37:04.329522 | orchestrator | Sunday 29 March 2026 05:37:03 +0000 (0:00:01.642) 0:01:11.433 **********
2026-03-29 05:37:04.329530 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:04.329538 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:04.329558 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:27.617465 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:27.617567 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:27.617576 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:27.617584 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:27.617591 | orchestrator |
2026-03-29 05:37:27.617599 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-29 05:37:27.617607 | orchestrator | Sunday 29 March 2026 05:37:05 +0000 (0:00:01.979) 0:01:13.412 **********
2026-03-29 05:37:27.617681 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:37:27.617689 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 05:37:27.617697 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 05:37:27.617703 | orchestrator |
2026-03-29 05:37:27.617710 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-29 05:37:27.617716 | orchestrator | Sunday 29 March 2026 05:37:08 +0000 (0:00:03.187) 0:01:16.599 **********
2026-03-29 05:37:27.617724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:37:27.617731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 05:37:27.617737 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 05:37:27.617744 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:27.617750 | orchestrator |
2026-03-29 05:37:27.617757 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-29 05:37:27.617765 | orchestrator | Sunday 29 March 2026 05:37:10 +0000 (0:00:01.345) 0:01:17.945 **********
2026-03-29 05:37:27.617775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617784 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617792 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617798 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:27.617805 | orchestrator |
2026-03-29 05:37:27.617811 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-29 05:37:27.617817 | orchestrator | Sunday 29 March 2026 05:37:11 +0000 (0:00:01.736) 0:01:19.681 **********
2026-03-29 05:37:27.617826 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617835 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617841 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617847 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:27.617854 | orchestrator |
2026-03-29 05:37:27.617861 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-29 05:37:27.617867 | orchestrator | Sunday 29 March 2026 05:37:13 +0000 (0:00:01.123) 0:01:20.805 **********
2026-03-29 05:37:27.617888 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a25d3bb21130', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 05:37:06.320937', 'end': '2026-03-29 05:37:06.364636', 'delta': '0:00:00.043699', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a25d3bb21130'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617974 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a6db66d8015c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 05:37:07.066157', 'end': '2026-03-29 05:37:07.131766', 'delta': '0:00:00.065609', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6db66d8015c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617986 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5a2b09aac491', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 05:37:07.642549', 'end': '2026-03-29 05:37:07.691733', 'delta': '0:00:00.049184', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5a2b09aac491'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-29 05:37:27.617993 | orchestrator |
2026-03-29 05:37:27.618000 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-29 05:37:27.618006 | orchestrator | Sunday 29 March 2026 05:37:14 +0000 (0:00:01.161) 0:01:21.967 **********
2026-03-29 05:37:27.618064 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:27.618071 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:27.618077 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:27.618084 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:27.618090 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:27.618097 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:27.618103 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:27.618109 | orchestrator |
2026-03-29 05:37:27.618115 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-29 05:37:27.618122 | orchestrator | Sunday 29 March 2026 05:37:16 +0000 (0:00:02.207) 0:01:24.175 **********
2026-03-29 05:37:27.618128 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:27.618134 | orchestrator |
2026-03-29 05:37:27.618141 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-29 05:37:27.618147 | orchestrator | Sunday 29 March 2026 05:37:17 +0000 (0:00:01.193) 0:01:25.369 **********
2026-03-29 05:37:27.618153 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:27.618159 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:27.618166 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:27.618172 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:27.618178 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:27.618188 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:27.618195 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:27.618201 | orchestrator |
2026-03-29 05:37:27.618208 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-29 05:37:27.618214 | orchestrator | Sunday 29 March 2026 05:37:19 +0000 (0:00:02.097) 0:01:27.467 **********
2026-03-29 05:37:27.618226 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:27.618232 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-29 05:37:27.618239 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-29 05:37:27.618245 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 05:37:27.618252 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-29 05:37:27.618259 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-29 05:37:27.618265 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-29 05:37:27.618271 | orchestrator |
2026-03-29 05:37:27.618278 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-29 05:37:27.618284 | orchestrator | Sunday 29 March 2026 05:37:23 +0000 (0:00:03.465) 0:01:30.932 **********
2026-03-29 05:37:27.618290 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:37:27.618297 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:37:27.618303 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:37:27.618309 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:37:27.618315 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:37:27.618322 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:37:27.618328 | orchestrator | ok: [testbed-manager]
2026-03-29 05:37:27.618334 | orchestrator |
2026-03-29 05:37:27.618340 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-29 05:37:27.618347 | orchestrator | Sunday 29 March 2026 05:37:25 +0000 (0:00:02.055) 0:01:32.987 **********
2026-03-29 05:37:27.618353 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:27.618359 | orchestrator |
2026-03-29 05:37:27.618366 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-29 05:37:27.618372 | orchestrator | Sunday 29 March 2026 05:37:26 +0000 (0:00:01.128) 0:01:34.116 **********
2026-03-29 05:37:27.618378 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:27.618384 | orchestrator |
2026-03-29 05:37:27.618396 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-29 05:37:42.023345 | orchestrator | Sunday 29 March 2026 05:37:27 +0000 (0:00:01.227) 0:01:35.343 **********
2026-03-29 05:37:42.023461 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:42.023478 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:42.023490 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:42.023501 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:42.023512 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:42.023522 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:42.023534 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:37:42.023545 | orchestrator |
2026-03-29 05:37:42.023557 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-29 05:37:42.023568 | orchestrator | Sunday 29 March 2026 05:37:29 +0000 (0:00:01.910) 0:01:37.712 **********
2026-03-29 05:37:42.023579 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:42.023592 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:42.023604 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:42.023615 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:42.023625 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:42.023636 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:42.023676 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:37:42.023697 | orchestrator |
2026-03-29 05:37:42.023717 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-29 05:37:42.023736 | orchestrator | Sunday 29 March 2026 05:37:31 +0000 (0:00:02.049) 0:01:39.622 **********
2026-03-29 05:37:42.023756 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:42.023769 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:42.023780 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:42.023791 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:42.023801 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:42.023812 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:42.023845 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:37:42.023856 | orchestrator |
2026-03-29 05:37:42.023868 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-29 05:37:42.023881 | orchestrator | Sunday 29 March 2026 05:37:33 +0000 (0:00:02.049) 0:01:41.672 **********
2026-03-29 05:37:42.023895 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:42.023907 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:42.023920 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:42.023933 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:42.023945 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:42.023958 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:42.023971 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:37:42.023984 | orchestrator |
2026-03-29 05:37:42.023996 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-29 05:37:42.024010 | orchestrator | Sunday 29 March 2026 05:37:35 +0000 (0:00:01.896) 0:01:43.568 **********
2026-03-29 05:37:42.024023 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:42.024036 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:42.024049 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:42.024062 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:42.024074 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:42.024087 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:42.024100 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:37:42.024112 | orchestrator |
2026-03-29 05:37:42.024125 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-29 05:37:42.024138 | orchestrator | Sunday 29 March 2026 05:37:37 +0000 (0:00:01.960) 0:01:45.529 **********
2026-03-29 05:37:42.024151 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:42.024163 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:42.024176 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:42.024188 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:42.024201 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:42.024215 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:42.024227 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:37:42.024240 | orchestrator |
2026-03-29 05:37:42.024268 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-29 05:37:42.024280 | orchestrator | Sunday 29 March 2026 05:37:39 +0000 (0:00:01.960) 0:01:47.489 **********
2026-03-29 05:37:42.024291 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:37:42.024302 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:37:42.024313 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:37:42.024324 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:42.024335 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:42.024345 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:37:42.024356 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:37:42.024367 | orchestrator |
2026-03-29 05:37:42.024378 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-29 05:37:42.024389 | orchestrator | Sunday 29 March 2026 05:37:41 +0000 (0:00:02.114) 0:01:49.603 **********
2026-03-29 05:37:42.024402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 05:37:42.024418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 05:37:42.024456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 05:37:42.024470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-29 05:37:42.024484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 05:37:42.024495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 05:37:42.024507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-29 05:37:42.024535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8615e525', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'],
'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-29 05:37:42.293710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.293831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.293852 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:37:42.293866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.293876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.293886 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.293915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 05:37:42.293928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.293938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.293969 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.294002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee30bf19', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-29 05:37:42.294075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.294088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.294098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.294116 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.294135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.518800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 05:37:42.518919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.518940 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.518954 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:37:42.518969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.519007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b0adc3c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14'], 
'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-29 05:37:42.519073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.519087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.519101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.519114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f', 'dm-uuid-LVM-0kHDhDCPHLGd2Fg1VzOlgDOeDKeaHucwfak19l6KqwOwdAXhRxsleFnI4v0OuiOl'], 'uuids': ['da8fc11e-6dfb-4dbe-b694-e6f7cad69a1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3d42ed5a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl']}})  2026-03-29 05:37:42.519136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e', 'scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be2200f0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-29 05:37:42.519149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w79kNO-xrib-djNF-BC1b-oenW-947w-67KtbL', 'scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472', 'scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd786153b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c']}})  2026-03-29 05:37:42.519164 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:37:42.519172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.519186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.739696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 05:37:42.739799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.739817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP', 'dm-uuid-CRYPT-LUKS2-d6bcf8282f5d4cd9b60620cb55b2c90a-kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 05:37:42.739830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.739859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c', 'dm-uuid-LVM-WmwWNP6o5LQNgrcvTESUpu2sCljSf9EJkfdNL8HsipxQGyavpLq36XQFDCYO8YrP'], 'uuids': ['d6bcf828-2f5d-4cd9-b606-20cb55b2c90a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd786153b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['kfdNL8-Hsip-xQGy-avpL-q36X-QFDC-YO8YrP']}})  2026-03-29 05:37:42.739904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-W8BXAo-VIeS-lNkU-0xsH-1v6j-IWb5-xeSbRL', 'scsi-0QEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249', 'scsi-SQEMU_QEMU_HARDDISK_3d42ed5a-37f6-4df6-b807-f02e933f3249'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3d42ed5a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f']}})  2026-03-29 05:37:42.739945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.739982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ccc377a4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16', 
'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1', 'scsi-SQEMU_QEMU_HARDDISK_ccc377a4-68eb-41df-b094-e638a3387548-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-29 05:37:42.740003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.740023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.740035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl', 'dm-uuid-CRYPT-LUKS2-da8fc11e6dfb4dbeb694e6f7cad69a1a-fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 05:37:42.740046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-03-29 05:37:42.740066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948', 'dm-uuid-LVM-VVVRanGAMYCBBo3Ea1Is2tjcYgwKNf2qA0QNo4TmjeChe8gjBEKp176k85VNMXVp'], 'uuids': ['f13fc2e4-c586-4a34-95a4-f625771d43e0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10b9e860', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp']}})  2026-03-29 05:37:42.776138 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:37:42.776264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a', 'scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93baa594', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-29 05:37:42.776285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXgjHK-x1j6-yafV-EcrV-Z8hS-LdwZ-h63E7O', 'scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0', 'scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2180dd6a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056']}})  2026-03-29 05:37:42.776316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.776353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.776366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-39-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 05:37:42.776378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.776390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.776421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW', 'dm-uuid-CRYPT-LUKS2-a49c734036574bbbb8952c2cd9942323-2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 05:37:42.776434 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33', 'dm-uuid-LVM-ZRBeHs6onLIpNjnfPONnwMoGWYFOYt3b0sOhEPSSzOPtCa3muL1oqHvJG7beZNDD'], 'uuids': ['5145feac-f6a0-43d9-bef0-ff6b872aac71'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '002a7ab0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD']}})  2026-03-29 05:37:42.776446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.776471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df205cf6--8b40--53f0--aec9--c93c6a681056-osd--block--df205cf6--8b40--53f0--aec9--c93c6a681056', 'dm-uuid-LVM-IXftd1VPXOpbncKd3f2ob1nYXsz4DemJ2XJQIMxaL0NRJ8j3ZeXDz0EJW4fLUFzW'], 'uuids': ['a49c7340-3657-4bbb-b895-2c2cd9942323'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2180dd6a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2XJQIM-xaL0-NRJ8-j3Ze-XDz0-EJW4-fLUFzW']}})  2026-03-29 05:37:42.776483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b', 'scsi-SQEMU_QEMU_HARDDISK_ef57056d-cdc7-4754-ab80-1b6d0ee4138b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ef57056d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-29 05:37:42.776495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-TjrJ6N-vXHW-nYMX-XIsI-w8Ql-NkWF-pB5l7A', 'scsi-0QEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62', 'scsi-SQEMU_QEMU_HARDDISK_10b9e860-1cc5-4615-8ff0-9bdd7bb94f62'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10b9e860', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948']}})  2026-03-29 05:37:42.776506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.776526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FIE3VR-hmEq-gbau-KgWX-Ie3n-RrWX-Y63w2o', 'scsi-0QEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735', 'scsi-SQEMU_QEMU_HARDDISK_ee98996d-a6b6-4070-b987-1a6503ed9735'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee98996d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844']}})  2026-03-29 05:37:42.878189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '36bedc35', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1', 'scsi-SQEMU_QEMU_HARDDISK_36bedc35-435f-4980-812f-4ca1d4f6c7bb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': 
'165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-29 05:37:42.878317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.878336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.878347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.878359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.878391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 05:37:42.878414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp', 'dm-uuid-CRYPT-LUKS2-f13fc2e4c5864a3495a4f625771d43e0-A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 05:37:42.878432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.878445 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:37:42.878459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': 
{'virtual': 1, 'links': {'ids': ['dm-name-dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe', 'dm-uuid-CRYPT-LUKS2-36d885a21b3e42128c82194bcbfb2fb2-dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 05:37:42.878471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:42.878483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--0734d53c--ec7b--5877--b2ad--f9abf7f5e844-osd--block--0734d53c--ec7b--5877--b2ad--f9abf7f5e844', 'dm-uuid-LVM-VJ9z4eyflUTf2lcw8J1Bh3VXDEKKGuPmdvxBFAfXTwWZGF4ojvc0MEIvaFMTSMoe'], 'uuids': ['36d885a2-1b3e-4212-8c82-194bcbfb2fb2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee98996d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['dvxBFA-fXTw-WZGF-4ojv-c0ME-IvaF-MTSMoe']}})  2026-03-29 05:37:42.878496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-VkIrl1-06lK-dW9p-hM1X-TIpn-uX5t-oclg00', 'scsi-0QEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa', 
'scsi-SQEMU_QEMU_HARDDISK_002a7ab0-e850-4de5-8841-9c71e722e4fa'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '002a7ab0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33-osd--block--4fd3485f--e3e8--5c51--9ad0--4caa09a6fb33']}})  2026-03-29 05:37:42.878517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '160e36ea', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-29 05:37:44.177406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD', 'dm-uuid-CRYPT-LUKS2-5145feacf6a043d9bef0ff6b872aac71-0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177452 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:37:44.177465 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177534 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177552 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-38-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 05:37:44.177564 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177576 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177587 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.177609 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '641edd66', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part16', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part14', 
'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part15', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part1', 'scsi-SQEMU_QEMU_HARDDISK_641edd66-c7f1-4829-b4ab-a5be1c0d9fdc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-29 05:37:44.317034 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.317116 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:37:44.317126 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:37:44.317135 | orchestrator | 2026-03-29 05:37:44.317143 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-29 05:37:44.317150 | orchestrator | Sunday 29 March 2026 05:37:44 +0000 (0:00:02.298) 0:01:51.902 ********** 2026-03-29 05:37:44.317158 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.317167 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.317174 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.317197 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.317221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.317229 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.317236 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.317244 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8615e525', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.317267 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468576 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468739 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468757 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468768 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468805 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468816 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468858 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468870 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468884 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ee30bf19', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee30bf19-1ab6-4918-a8c8-c92c337d13e6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468903 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.468925 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.826542 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:37:44.826756 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.826789 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.826810 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.826866 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.826887 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.826924 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.826970 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.826993 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b0adc3c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1', 'scsi-SQEMU_QEMU_HARDDISK_9b0adc3c-7f5d-4894-a427-7dd9e74f1d22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.827029 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.827058 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:44.827081 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:37:44.827114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--09734191--f9bf--5626--be02--fa226447c12f-osd--block--09734191--f9bf--5626--be02--fa226447c12f', 'dm-uuid-LVM-0kHDhDCPHLGd2Fg1VzOlgDOeDKeaHucwfak19l6KqwOwdAXhRxsleFnI4v0OuiOl'], 'uuids': ['da8fc11e-6dfb-4dbe-b694-e6f7cad69a1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3d42ed5a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fak19l-6Kqw-OwdA-XhRx-sleF-nI4v-0OuiOl']}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038522 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:37:45.038536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e', 'scsi-SQEMU_QEMU_HARDDISK_be2200f0-5502-47ad-8b86-f79404ad3d6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be2200f0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038548 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w79kNO-xrib-djNF-BC1b-oenW-947w-67KtbL', 'scsi-0QEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472', 'scsi-SQEMU_QEMU_HARDDISK_d786153b-aa88-42e2-b7c0-be41a0e4d472'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd786153b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6a86fe60--1e0e--551e--abcc--872f54df7e3c-osd--block--6a86fe60--1e0e--551e--abcc--872f54df7e3c']}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038616 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038636 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--eec6ab8e--cb01--5d55--a04b--fe63d54a2948-osd--block--eec6ab8e--cb01--5d55--a04b--fe63d54a2948', 'dm-uuid-LVM-VVVRanGAMYCBBo3Ea1Is2tjcYgwKNf2qA0QNo4TmjeChe8gjBEKp176k85VNMXVp'], 'uuids': ['f13fc2e4-c586-4a34-95a4-f625771d43e0'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10b9e860', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A0QNo4-Tmje-Che8-gjBE-Kp17-6k85-VNMXVp']}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038646 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a', 'scsi-SQEMU_QEMU_HARDDISK_93baa594-14d8-4050-b691-1dff11f6053a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93baa594', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:45.038716 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXgjHK-x1j6-yafV-EcrV-Z8hS-LdwZ-h63E7O', 'scsi-0QEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0', 'scsi-SQEMU_QEMU_HARDDISK_2180dd6a-0158-4028-8893-0009518a5de0'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2180dd6a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
2026-03-29 05:37:45.038735 | orchestrator | skipping: [testbed-node-3] => (per-device loop items elided; condition 'osd_auto_discovery | default(False) | bool' was false for every block device: sda, sdc, sr0, dm-*, loop0-7)
2026-03-29 05:37:45.038773 | orchestrator | skipping: [testbed-node-4] => (per-device loop items elided; condition 'osd_auto_discovery | default(False) | bool' was false for every block device: sda, sdc, sr0, dm-*, loop0-7)
2026-03-29 05:37:45.290632 | orchestrator | skipping: [testbed-node-5] => (per-device loop items elided; condition 'osd_auto_discovery | default(False) | bool' was false for every block device: sda, sdb, sdc, sdd, sr0, dm-*, loop0-7)
2026-03-29 05:37:45.290858 | orchestrator | skipping: [testbed-manager] => (per-device loop items elided; condition 'inventory_hostname in groups.get(osd_group_name, [])' was false)
2026-03-29 05:37:45.404644 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:37:45.404797 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:37:45.442339 | orchestrator | skipping: [testbed-manager]
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '160e36ea', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1', 'scsi-SQEMU_QEMU_HARDDISK_160e36ea-4e1e-4f6f-a576-5c1ba660feb6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:54.064900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:54.064908 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:54.064915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD', 'dm-uuid-CRYPT-LUKS2-5145feacf6a043d9bef0ff6b872aac71-0sOhEP-SSzO-PtCa-3muL-1oqH-vJG7-beZNDD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:37:54.064923 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:37:54.064932 | orchestrator | 2026-03-29 05:37:54.064939 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-29 05:37:54.064947 | orchestrator | Sunday 29 March 2026 05:37:46 +0000 (0:00:02.522) 0:01:54.425 ********** 2026-03-29 05:37:54.064953 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:37:54.064961 | orchestrator | ok: [testbed-node-1] 2026-03-29 05:37:54.064967 | orchestrator | ok: [testbed-node-2] 2026-03-29 05:37:54.064978 | orchestrator | ok: [testbed-node-3] 2026-03-29 05:37:54.064990 | orchestrator | ok: [testbed-node-4] 2026-03-29 05:37:54.065001 | orchestrator | ok: [testbed-node-5] 2026-03-29 05:37:54.065025 | orchestrator | ok: [testbed-manager] 2026-03-29 05:37:54.065036 | orchestrator | 2026-03-29 05:37:54.065048 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-29 05:37:54.065059 | orchestrator | Sunday 29 March 2026 05:37:49 +0000 (0:00:02.570) 0:01:56.996 ********** 2026-03-29 05:37:54.065070 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:37:54.065082 | orchestrator | ok: [testbed-node-1] 2026-03-29 05:37:54.065089 | orchestrator | ok: [testbed-node-2] 2026-03-29 05:37:54.065095 | orchestrator | ok: [testbed-node-3] 2026-03-29 05:37:54.065102 | orchestrator | ok: [testbed-node-4] 2026-03-29 05:37:54.065108 | orchestrator | ok: [testbed-node-5] 2026-03-29 05:37:54.065115 | orchestrator | ok: [testbed-manager] 2026-03-29 05:37:54.065121 | orchestrator | 2026-03-29 05:37:54.065128 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 05:37:54.065134 | orchestrator | Sunday 29 March 2026 05:37:51 +0000 (0:00:01.863) 0:01:58.860 ********** 2026-03-29 
05:37:54.065141 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:37:54.065147 | orchestrator | ok: [testbed-node-1] 2026-03-29 05:37:54.065154 | orchestrator | ok: [testbed-node-2] 2026-03-29 05:37:54.065161 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:37:54.065170 | orchestrator | ok: [testbed-node-4] 2026-03-29 05:37:54.065184 | orchestrator | ok: [testbed-node-5] 2026-03-29 05:37:54.065191 | orchestrator | ok: [testbed-node-3] 2026-03-29 05:37:54.065199 | orchestrator | 2026-03-29 05:37:54.065207 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 05:37:54.065215 | orchestrator | Sunday 29 March 2026 05:37:53 +0000 (0:00:02.733) 0:02:01.593 ********** 2026-03-29 05:37:54.065230 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:38:23.355255 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:38:23.355391 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:38:23.355418 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.355430 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:38:23.355440 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:38:23.355452 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:38:23.355463 | orchestrator | 2026-03-29 05:38:23.355475 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 05:38:23.355488 | orchestrator | Sunday 29 March 2026 05:37:55 +0000 (0:00:01.877) 0:02:03.471 ********** 2026-03-29 05:38:23.355499 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:38:23.355515 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:38:23.355532 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:38:23.355552 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.355572 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:38:23.355590 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:38:23.355607 | orchestrator | ok: 
[testbed-manager -> testbed-node-2(192.168.16.12)] 2026-03-29 05:38:23.355632 | orchestrator | 2026-03-29 05:38:23.355655 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 05:38:23.355694 | orchestrator | Sunday 29 March 2026 05:37:58 +0000 (0:00:02.559) 0:02:06.031 ********** 2026-03-29 05:38:23.355773 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:38:23.355791 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:38:23.355810 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:38:23.355827 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.355845 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:38:23.355862 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:38:23.355880 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:38:23.355899 | orchestrator | 2026-03-29 05:38:23.355916 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-29 05:38:23.355933 | orchestrator | Sunday 29 March 2026 05:38:00 +0000 (0:00:01.874) 0:02:07.905 ********** 2026-03-29 05:38:23.355952 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:38:23.355971 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-29 05:38:23.355988 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 05:38:23.356006 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-29 05:38:23.356024 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-29 05:38:23.356043 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 05:38:23.356061 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-29 05:38:23.356080 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-29 05:38:23.356099 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-29 05:38:23.356118 | orchestrator | ok: [testbed-node-4] => 
(item=testbed-node-0) 2026-03-29 05:38:23.356138 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-29 05:38:23.356157 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-29 05:38:23.356175 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-29 05:38:23.356186 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-29 05:38:23.356197 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-29 05:38:23.356209 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-29 05:38:23.356220 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-29 05:38:23.356230 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-29 05:38:23.356269 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-29 05:38:23.356281 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-29 05:38:23.356292 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-29 05:38:23.356302 | orchestrator | 2026-03-29 05:38:23.356313 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 05:38:23.356324 | orchestrator | Sunday 29 March 2026 05:38:03 +0000 (0:00:03.112) 0:02:11.017 ********** 2026-03-29 05:38:23.356335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 05:38:23.356347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 05:38:23.356358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 05:38:23.356368 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:38:23.356379 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-29 05:38:23.356390 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-29 05:38:23.356400 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-29 05:38:23.356411 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 05:38:23.356421 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-29 05:38:23.356432 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-29 05:38:23.356443 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-29 05:38:23.356453 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:38:23.356464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 05:38:23.356475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-29 05:38:23.356486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 05:38:23.356496 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.356507 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-29 05:38:23.356518 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-29 05:38:23.356528 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-29 05:38:23.356539 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:38:23.356549 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-29 05:38:23.356560 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-29 05:38:23.356571 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-29 05:38:23.356581 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:38:23.356592 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-29 05:38:23.356625 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-29 05:38:23.356637 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-29 05:38:23.356648 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:38:23.356658 | orchestrator | 2026-03-29 05:38:23.356669 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] 
*********************** 2026-03-29 05:38:23.356680 | orchestrator | Sunday 29 March 2026 05:38:05 +0000 (0:00:02.359) 0:02:13.377 ********** 2026-03-29 05:38:23.356691 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:38:23.356733 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:38:23.356752 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:38:23.356764 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:38:23.356775 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 05:38:23.356787 | orchestrator | 2026-03-29 05:38:23.356798 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-29 05:38:23.356819 | orchestrator | Sunday 29 March 2026 05:38:07 +0000 (0:00:02.166) 0:02:15.544 ********** 2026-03-29 05:38:23.356830 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.356841 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:38:23.356862 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:38:23.356873 | orchestrator | 2026-03-29 05:38:23.356884 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-29 05:38:23.356895 | orchestrator | Sunday 29 March 2026 05:38:09 +0000 (0:00:01.540) 0:02:17.085 ********** 2026-03-29 05:38:23.356906 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.356916 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:38:23.356927 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:38:23.356937 | orchestrator | 2026-03-29 05:38:23.356948 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-29 05:38:23.356959 | orchestrator | Sunday 29 March 2026 05:38:10 +0000 (0:00:01.351) 0:02:18.436 ********** 2026-03-29 05:38:23.356970 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 05:38:23.356980 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:38:23.356991 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:38:23.357001 | orchestrator | 2026-03-29 05:38:23.357012 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-29 05:38:23.357023 | orchestrator | Sunday 29 March 2026 05:38:12 +0000 (0:00:01.373) 0:02:19.810 ********** 2026-03-29 05:38:23.357034 | orchestrator | ok: [testbed-node-3] 2026-03-29 05:38:23.357045 | orchestrator | ok: [testbed-node-4] 2026-03-29 05:38:23.357056 | orchestrator | ok: [testbed-node-5] 2026-03-29 05:38:23.357066 | orchestrator | 2026-03-29 05:38:23.357077 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-29 05:38:23.357088 | orchestrator | Sunday 29 March 2026 05:38:13 +0000 (0:00:01.458) 0:02:21.268 ********** 2026-03-29 05:38:23.357099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 05:38:23.357109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 05:38:23.357120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 05:38:23.357131 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.357141 | orchestrator | 2026-03-29 05:38:23.357152 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-29 05:38:23.357163 | orchestrator | Sunday 29 March 2026 05:38:14 +0000 (0:00:01.355) 0:02:22.624 ********** 2026-03-29 05:38:23.357173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 05:38:23.357184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 05:38:23.357195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 05:38:23.357205 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.357216 | orchestrator | 2026-03-29 05:38:23.357227 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-29 05:38:23.357238 | orchestrator | Sunday 29 March 2026 05:38:16 +0000 (0:00:01.671) 0:02:24.295 ********** 2026-03-29 05:38:23.357248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 05:38:23.357259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 05:38:23.357270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 05:38:23.357280 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:38:23.357291 | orchestrator | 2026-03-29 05:38:23.357302 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-29 05:38:23.357313 | orchestrator | Sunday 29 March 2026 05:38:18 +0000 (0:00:01.656) 0:02:25.952 ********** 2026-03-29 05:38:23.357323 | orchestrator | ok: [testbed-node-3] 2026-03-29 05:38:23.357334 | orchestrator | ok: [testbed-node-4] 2026-03-29 05:38:23.357345 | orchestrator | ok: [testbed-node-5] 2026-03-29 05:38:23.357356 | orchestrator | 2026-03-29 05:38:23.357367 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-29 05:38:23.357377 | orchestrator | Sunday 29 March 2026 05:38:19 +0000 (0:00:01.583) 0:02:27.536 ********** 2026-03-29 05:38:23.357388 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-29 05:38:23.357399 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-29 05:38:23.357409 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-29 05:38:23.357427 | orchestrator | 2026-03-29 05:38:23.357438 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-29 05:38:23.357449 | orchestrator | Sunday 29 March 2026 05:38:21 +0000 (0:00:01.554) 0:02:29.090 ********** 2026-03-29 05:38:23.357459 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:38:23.357470 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 05:38:23.357482 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:38:23.357493 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-29 05:38:23.357504 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 05:38:23.357521 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 05:39:11.698145 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 05:39:11.698287 | orchestrator | 2026-03-29 05:39:11.698323 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-29 05:39:11.698342 | orchestrator | Sunday 29 March 2026 05:38:23 +0000 (0:00:01.989) 0:02:31.079 ********** 2026-03-29 05:39:11.698361 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:39:11.698380 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 05:39:11.698399 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:39:11.698417 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-29 05:39:11.698436 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 05:39:11.698476 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 05:39:11.698490 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 05:39:11.698506 | orchestrator | 2026-03-29 05:39:11.698525 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-03-29 
05:39:11.698544 | orchestrator | Sunday 29 March 2026 05:38:26 +0000 (0:00:02.831) 0:02:33.911 ********** 2026-03-29 05:39:11.698564 | orchestrator | changed: [testbed-manager] 2026-03-29 05:39:11.698584 | orchestrator | changed: [testbed-node-4] 2026-03-29 05:39:11.698682 | orchestrator | changed: [testbed-node-5] 2026-03-29 05:39:11.698706 | orchestrator | changed: [testbed-node-3] 2026-03-29 05:39:11.698726 | orchestrator | changed: [testbed-node-0] 2026-03-29 05:39:11.698785 | orchestrator | changed: [testbed-node-2] 2026-03-29 05:39:11.698805 | orchestrator | changed: [testbed-node-1] 2026-03-29 05:39:11.698825 | orchestrator | 2026-03-29 05:39:11.698846 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] *********************** 2026-03-29 05:39:11.698866 | orchestrator | Sunday 29 March 2026 05:38:37 +0000 (0:00:11.286) 0:02:45.198 ********** 2026-03-29 05:39:11.698885 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.698899 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.698912 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.698925 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.698938 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.698950 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.698961 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.698971 | orchestrator | 2026-03-29 05:39:11.698982 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-03-29 05:39:11.698993 | orchestrator | Sunday 29 March 2026 05:38:39 +0000 (0:00:02.045) 0:02:47.244 ********** 2026-03-29 05:39:11.699004 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.699015 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.699025 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.699036 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.699047 | orchestrator | 
skipping: [testbed-node-4] 2026-03-29 05:39:11.699082 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.699094 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.699105 | orchestrator | 2026-03-29 05:39:11.699116 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-03-29 05:39:11.699126 | orchestrator | Sunday 29 March 2026 05:38:41 +0000 (0:00:01.876) 0:02:49.120 ********** 2026-03-29 05:39:11.699137 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.699148 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:39:11.699159 | orchestrator | ok: [testbed-node-1] 2026-03-29 05:39:11.699169 | orchestrator | ok: [testbed-node-2] 2026-03-29 05:39:11.699180 | orchestrator | ok: [testbed-node-3] 2026-03-29 05:39:11.699190 | orchestrator | ok: [testbed-node-4] 2026-03-29 05:39:11.699201 | orchestrator | ok: [testbed-node-5] 2026-03-29 05:39:11.699211 | orchestrator | 2026-03-29 05:39:11.699222 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-03-29 05:39:11.699274 | orchestrator | Sunday 29 March 2026 05:38:44 +0000 (0:00:02.976) 0:02:52.097 ********** 2026-03-29 05:39:11.699289 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-29 05:39:11.699302 | orchestrator | 2026-03-29 05:39:11.699313 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-03-29 05:39:11.699325 | orchestrator | Sunday 29 March 2026 05:38:47 +0000 (0:00:02.796) 0:02:54.893 ********** 2026-03-29 05:39:11.699336 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.699347 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.699358 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.699369 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 05:39:11.699380 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.699390 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.699401 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.699412 | orchestrator | 2026-03-29 05:39:11.699423 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-03-29 05:39:11.699434 | orchestrator | Sunday 29 March 2026 05:38:49 +0000 (0:00:01.924) 0:02:56.818 ********** 2026-03-29 05:39:11.699445 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.699455 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.699466 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.699528 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.699540 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.699551 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.699562 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.699572 | orchestrator | 2026-03-29 05:39:11.699583 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-03-29 05:39:11.699594 | orchestrator | Sunday 29 March 2026 05:38:51 +0000 (0:00:02.100) 0:02:58.919 ********** 2026-03-29 05:39:11.699605 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.699615 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.699627 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.699646 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.699664 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.699709 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.699729 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.699747 | orchestrator | 2026-03-29 05:39:11.699824 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-03-29 05:39:11.699837 | orchestrator | 
Sunday 29 March 2026 05:38:53 +0000 (0:00:01.862) 0:03:00.782 ********** 2026-03-29 05:39:11.699847 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.699858 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.699869 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.699879 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.699890 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.699901 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.699924 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.699935 | orchestrator | 2026-03-29 05:39:11.699946 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] ********************** 2026-03-29 05:39:11.699957 | orchestrator | Sunday 29 March 2026 05:38:55 +0000 (0:00:02.068) 0:03:02.850 ********** 2026-03-29 05:39:11.699967 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.699987 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.699998 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.700009 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.700019 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.700030 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.700041 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.700051 | orchestrator | 2026-03-29 05:39:11.700062 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-03-29 05:39:11.700073 | orchestrator | Sunday 29 March 2026 05:38:57 +0000 (0:00:01.899) 0:03:04.750 ********** 2026-03-29 05:39:11.700084 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.700095 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.700105 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.700116 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.700126 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 05:39:11.700137 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.700147 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.700158 | orchestrator | 2026-03-29 05:39:11.700169 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-03-29 05:39:11.700180 | orchestrator | Sunday 29 March 2026 05:38:59 +0000 (0:00:02.072) 0:03:06.823 ********** 2026-03-29 05:39:11.700191 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.700201 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.700212 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.700222 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.700233 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.700244 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.700254 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.700265 | orchestrator | 2026-03-29 05:39:11.700276 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-03-29 05:39:11.700286 | orchestrator | Sunday 29 March 2026 05:39:01 +0000 (0:00:02.159) 0:03:08.983 ********** 2026-03-29 05:39:11.700297 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.700308 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.700318 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.700329 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.700340 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.700350 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.700361 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.700371 | orchestrator | 2026-03-29 05:39:11.700382 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-03-29 05:39:11.700393 | orchestrator | Sunday 29 March 2026 05:39:03 +0000 (0:00:02.085) 
0:03:11.068 ********** 2026-03-29 05:39:11.700404 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.700414 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.700425 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.700436 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.700447 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.700457 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.700468 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.700479 | orchestrator | 2026-03-29 05:39:11.700490 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-03-29 05:39:11.700500 | orchestrator | Sunday 29 March 2026 05:39:05 +0000 (0:00:02.055) 0:03:13.124 ********** 2026-03-29 05:39:11.700511 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.700522 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.700540 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.700551 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.700562 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.700572 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.700583 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.700594 | orchestrator | 2026-03-29 05:39:11.700605 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-03-29 05:39:11.700615 | orchestrator | Sunday 29 March 2026 05:39:07 +0000 (0:00:01.837) 0:03:14.961 ********** 2026-03-29 05:39:11.700626 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.700637 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.700647 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.700658 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.700669 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.700679 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 05:39:11.700690 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.700700 | orchestrator | 2026-03-29 05:39:11.700711 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-03-29 05:39:11.700722 | orchestrator | Sunday 29 March 2026 05:39:09 +0000 (0:00:02.273) 0:03:17.235 ********** 2026-03-29 05:39:11.700733 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:11.700743 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:11.700816 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:11.700838 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:11.700858 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:11.700874 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:11.700892 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:11.700910 | orchestrator | 2026-03-29 05:39:11.700930 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-03-29 05:39:11.700961 | orchestrator | Sunday 29 March 2026 05:39:11 +0000 (0:00:02.178) 0:03:19.414 ********** 2026-03-29 05:39:32.031574 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:32.031724 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:32.031752 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:32.031775 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 05:39:32.031849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 05:39:32.031861 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.031872 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 
'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 05:39:32.031899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 05:39:32.031911 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:32.031922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 05:39:32.031934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 05:39:32.031945 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.031956 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:32.031967 | orchestrator | 2026-03-29 05:39:32.031979 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-03-29 05:39:32.031992 | orchestrator | Sunday 29 March 2026 05:39:13 +0000 (0:00:01.913) 0:03:21.327 ********** 2026-03-29 05:39:32.032003 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:32.032014 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:32.032025 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:32.032059 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.032070 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:32.032082 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.032095 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:32.032108 | orchestrator | 2026-03-29 05:39:32.032122 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-03-29 05:39:32.032134 | orchestrator | Sunday 29 March 2026 05:39:15 +0000 (0:00:01.845) 0:03:23.173 ********** 2026-03-29 05:39:32.032147 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 05:39:32.032160 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:32.032173 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:32.032186 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.032199 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:32.032211 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.032224 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:32.032236 | orchestrator | 2026-03-29 05:39:32.032249 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-03-29 05:39:32.032263 | orchestrator | Sunday 29 March 2026 05:39:17 +0000 (0:00:01.946) 0:03:25.120 ********** 2026-03-29 05:39:32.032276 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:32.032288 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:32.032301 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:32.032314 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.032326 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:32.032339 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.032352 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:32.032365 | orchestrator | 2026-03-29 05:39:32.032378 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-03-29 05:39:32.032391 | orchestrator | Sunday 29 March 2026 05:39:19 +0000 (0:00:01.646) 0:03:26.766 ********** 2026-03-29 05:39:32.032403 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:32.032417 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:32.032430 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:32.032443 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.032454 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:32.032465 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.032475 | orchestrator | skipping: [testbed-manager] 
2026-03-29 05:39:32.032486 | orchestrator | 2026-03-29 05:39:32.032497 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-03-29 05:39:32.032508 | orchestrator | Sunday 29 March 2026 05:39:21 +0000 (0:00:02.005) 0:03:28.771 ********** 2026-03-29 05:39:32.032519 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:32.032530 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:32.032540 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:32.032551 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.032562 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:32.032573 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.032583 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:32.032594 | orchestrator | 2026-03-29 05:39:32.032605 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-03-29 05:39:32.032616 | orchestrator | Sunday 29 March 2026 05:39:23 +0000 (0:00:02.004) 0:03:30.775 ********** 2026-03-29 05:39:32.032627 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:32.032637 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:32.032648 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:32.032659 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.032669 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:32.032680 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.032691 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:32.032701 | orchestrator | 2026-03-29 05:39:32.032712 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-03-29 05:39:32.032723 | orchestrator | Sunday 29 March 2026 05:39:24 +0000 (0:00:01.956) 0:03:32.731 ********** 2026-03-29 05:39:32.032743 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:32.032754 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 05:39:32.032827 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:32.032842 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:32.032853 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 05:39:32.032865 | orchestrator | 2026-03-29 05:39:32.032876 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-03-29 05:39:32.032886 | orchestrator | Sunday 29 March 2026 05:39:27 +0000 (0:00:02.562) 0:03:35.294 ********** 2026-03-29 05:39:32.032898 | orchestrator | ok: [testbed-node-3] 2026-03-29 05:39:32.032910 | orchestrator | ok: [testbed-node-4] 2026-03-29 05:39:32.032920 | orchestrator | ok: [testbed-node-5] 2026-03-29 05:39:32.032931 | orchestrator | 2026-03-29 05:39:32.032942 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-03-29 05:39:32.032953 | orchestrator | Sunday 29 March 2026 05:39:28 +0000 (0:00:01.401) 0:03:36.696 ********** 2026-03-29 05:39:32.032970 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 05:39:32.032981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 05:39:32.032992 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.033004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 05:39:32.033015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 05:39:32.033026 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 05:39:32.033037 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 05:39:32.033048 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 05:39:32.033059 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.033070 | orchestrator | 2026-03-29 05:39:32.033080 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-03-29 05:39:32.033092 | orchestrator | Sunday 29 March 2026 05:39:30 +0000 (0:00:01.425) 0:03:38.122 ********** 2026-03-29 05:39:32.033104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:32.033119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:32.033130 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:32.033141 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:32.033153 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:32.033172 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:32.033183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:32.033195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:32.033206 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:32.033217 | orchestrator | 2026-03-29 05:39:32.033228 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-03-29 05:39:32.033246 | orchestrator | Sunday 29 March 2026 05:39:32 +0000 (0:00:01.629) 0:03:39.752 ********** 2026-03-29 05:39:40.879247 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:40.879366 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:40.879383 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:40.879396 | orchestrator | 2026-03-29 05:39:40.879409 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-03-29 05:39:40.879421 | orchestrator | Sunday 29 March 2026 05:39:33 +0000 (0:00:01.315) 0:03:41.067 ********** 2026-03-29 05:39:40.879432 | orchestrator | 
skipping: [testbed-node-3] 2026-03-29 05:39:40.879443 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:40.879455 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:40.879466 | orchestrator | 2026-03-29 05:39:40.879477 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-03-29 05:39:40.879488 | orchestrator | Sunday 29 March 2026 05:39:34 +0000 (0:00:01.343) 0:03:42.411 ********** 2026-03-29 05:39:40.879499 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:40.879510 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:40.879538 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:40.879550 | orchestrator | 2026-03-29 05:39:40.879561 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-03-29 05:39:40.879573 | orchestrator | Sunday 29 March 2026 05:39:36 +0000 (0:00:01.347) 0:03:43.759 ********** 2026-03-29 05:39:40.879584 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:40.879595 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:40.879606 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:40.879617 | orchestrator | 2026-03-29 05:39:40.879628 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-03-29 05:39:40.879639 | orchestrator | Sunday 29 March 2026 05:39:37 +0000 (0:00:01.291) 0:03:45.051 ********** 2026-03-29 05:39:40.879651 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}) 2026-03-29 05:39:40.879664 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}) 2026-03-29 05:39:40.879675 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 
'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}) 2026-03-29 05:39:40.879686 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}) 2026-03-29 05:39:40.879697 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}) 2026-03-29 05:39:40.879708 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}) 2026-03-29 05:39:40.879741 | orchestrator | 2026-03-29 05:39:40.879755 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-03-29 05:39:40.879769 | orchestrator | Sunday 29 March 2026 05:39:39 +0000 (0:00:02.107) 0:03:47.159 ********** 2026-03-29 05:39:40.879818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c/osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1774752755.2604833, 'mtime': 1774752755.2554832, 'ctime': 1774752755.2554832, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c/osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'follow': 
True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:40.879882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-09734191-f9bf-5626-be02-fa226447c12f/osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1774752776.4328551, 'mtime': 1774752776.428855, 'ctime': 1774752776.428855, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-09734191-f9bf-5626-be02-fa226447c12f/osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:40.879903 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:40.879917 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-df205cf6-8b40-53f0-aec9-c93c6a681056/osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1774752755.020052, 'mtime': 1774752755.0150518, 'ctime': 1774752755.0150518, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-df205cf6-8b40-53f0-aec9-c93c6a681056/osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:40.879942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948/osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1774752776.2754204, 'mtime': 1774752776.2704203, 'ctime': 1774752776.2704203, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': 
{'path': '/dev/ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948/osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:40.879956 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:40.879983 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844/osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1774752755.5348525, 'mtime': 1774752755.5318525, 'ctime': 1774752755.5318525, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844/osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641090 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33/osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1774752774.012181, 'mtime': 1774752774.009181, 'ctime': 1774752774.009181, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33/osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641235 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:46.641266 | orchestrator | 2026-03-29 05:39:46.641284 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-03-29 05:39:46.641302 | orchestrator | Sunday 29 March 2026 05:39:40 +0000 (0:00:01.455) 0:03:48.614 ********** 2026-03-29 05:39:46.641319 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 05:39:46.641335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 05:39:46.641352 | 
orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:46.641368 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 05:39:46.641385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 05:39:46.641402 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:46.641418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 05:39:46.641436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 05:39:46.641451 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:46.641466 | orchestrator | 2026-03-29 05:39:46.641476 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-03-29 05:39:46.641487 | orchestrator | Sunday 29 March 2026 05:39:42 +0000 (0:00:01.303) 0:03:49.917 ********** 2026-03-29 05:39:46.641498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}, 'ansible_loop_var': 'item'})  
2026-03-29 05:39:46.641519 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:46.641529 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641573 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641584 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:46.641593 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641614 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641626 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:46.641637 | orchestrator | 2026-03-29 05:39:46.641648 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-03-29 05:39:46.641659 | orchestrator | Sunday 29 March 2026 05:39:43 +0000 (0:00:01.420) 0:03:51.338 ********** 2026-03-29 05:39:46.641671 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'})  2026-03-29 05:39:46.641682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'})  2026-03-29 05:39:46.641693 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:46.641704 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'})  2026-03-29 05:39:46.641716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'})  2026-03-29 05:39:46.641726 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:46.641738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'})  2026-03-29 05:39:46.641749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'})  2026-03-29 05:39:46.641759 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:46.641770 | orchestrator | 2026-03-29 05:39:46.641781 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-03-29 05:39:46.641792 | orchestrator | Sunday 29 March 2026 05:39:45 +0000 (0:00:01.653) 0:03:52.992 ********** 2026-03-29 05:39:46.641833 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-6a86fe60-1e0e-551e-abcc-872f54df7e3c', 'data_vg': 'ceph-6a86fe60-1e0e-551e-abcc-872f54df7e3c'}, 
'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-09734191-f9bf-5626-be02-fa226447c12f', 'data_vg': 'ceph-09734191-f9bf-5626-be02-fa226447c12f'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641868 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:46.641880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-df205cf6-8b40-53f0-aec9-c93c6a681056', 'data_vg': 'ceph-df205cf6-8b40-53f0-aec9-c93c6a681056'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-eec6ab8e-cb01-5d55-a04b-fe63d54a2948', 'data_vg': 'ceph-eec6ab8e-cb01-5d55-a04b-fe63d54a2948'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641902 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:46.641913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-0734d53c-ec7b-5877-b2ad-f9abf7f5e844', 'data_vg': 'ceph-0734d53c-ec7b-5877-b2ad-f9abf7f5e844'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:46.641944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33', 'data_vg': 'ceph-4fd3485f-e3e8-5c51-9ad0-4caa09a6fb33'}, 'ansible_loop_var': 'item'})  2026-03-29 05:39:56.067951 | 
orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:56.068051 | orchestrator | 2026-03-29 05:39:56.068060 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-03-29 05:39:56.068066 | orchestrator | Sunday 29 March 2026 05:39:46 +0000 (0:00:01.372) 0:03:54.364 ********** 2026-03-29 05:39:56.068071 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:56.068076 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:56.068081 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:56.068086 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:56.068090 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:56.068095 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:56.068100 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:56.068105 | orchestrator | 2026-03-29 05:39:56.068110 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-03-29 05:39:56.068115 | orchestrator | Sunday 29 March 2026 05:39:48 +0000 (0:00:01.858) 0:03:56.223 ********** 2026-03-29 05:39:56.068120 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:56.068125 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:56.068129 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:39:56.068133 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:39:56.068139 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 05:39:56.068143 | orchestrator | 2026-03-29 05:39:56.068148 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-03-29 05:39:56.068152 | orchestrator | Sunday 29 March 2026 05:39:51 +0000 (0:00:02.520) 0:03:58.743 ********** 2026-03-29 05:39:56.068157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-03-29 05:39:56.068164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068205 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:56.068210 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:56.068233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 
'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068273 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:56.068277 | orchestrator | 2026-03-29 05:39:56.068282 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-03-29 05:39:56.068286 | orchestrator | Sunday 29 March 2026 05:39:52 +0000 (0:00:01.396) 0:04:00.139 ********** 2026-03-29 05:39:56.068291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068347 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:56.068361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-03-29 05:39:56.068370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068400 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:56.068407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068526 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:56.068531 | orchestrator | 2026-03-29 05:39:56.068537 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-03-29 05:39:56.068543 | orchestrator | Sunday 29 March 2026 05:39:54 +0000 (0:00:01.722) 0:04:01.862 ********** 2026-03-29 
05:39:56.068555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068582 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:39:56.068588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068610 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:39:56.068615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-29 05:39:56.068619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 05:39:56.068637 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:39:56.068642 | orchestrator | 2026-03-29 05:39:56.068649 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-03-29 05:39:56.068654 | orchestrator | Sunday 29 March 2026 05:39:55 +0000 (0:00:01.519) 0:04:03.382 ********** 2026-03-29 05:39:56.068658 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:39:56.068663 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:39:56.068674 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:40:10.646341 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:40:10.646433 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:40:10.646442 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:40:10.646448 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:40:10.646454 | orchestrator | 2026-03-29 05:40:10.646460 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-03-29 05:40:10.646467 | orchestrator | Sunday 29 March 2026 05:39:57 +0000 (0:00:01.999) 0:04:05.381 ********** 2026-03-29 05:40:10.646472 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:40:10.646479 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
05:40:10.646484 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:40:10.646490 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:40:10.646495 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:40:10.646500 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:40:10.646522 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:40:10.646528 | orchestrator | 2026-03-29 05:40:10.646534 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-03-29 05:40:10.646539 | orchestrator | Sunday 29 March 2026 05:39:59 +0000 (0:00:02.150) 0:04:07.532 ********** 2026-03-29 05:40:10.646545 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:40:10.646550 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:40:10.646556 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:40:10.646561 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:40:10.646566 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:40:10.646572 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:40:10.646577 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:40:10.646582 | orchestrator | 2026-03-29 05:40:10.646588 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-03-29 05:40:10.646594 | orchestrator | Sunday 29 March 2026 05:40:01 +0000 (0:00:02.018) 0:04:09.550 ********** 2026-03-29 05:40:10.646600 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:40:10.646605 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:40:10.646610 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:40:10.646615 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:40:10.646621 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:40:10.646626 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:40:10.646631 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:40:10.646636 | orchestrator | 2026-03-29 05:40:10.646641 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-03-29 05:40:10.646648 | orchestrator | Sunday 29 March 2026 05:40:03 +0000 (0:00:01.962) 0:04:11.513 ********** 2026-03-29 05:40:10.646653 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:40:10.646658 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:40:10.646664 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:40:10.646669 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:40:10.646674 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:40:10.646679 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:40:10.646684 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:40:10.646690 | orchestrator | 2026-03-29 05:40:10.646695 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-03-29 05:40:10.646700 | orchestrator | Sunday 29 March 2026 05:40:05 +0000 (0:00:02.056) 0:04:13.570 ********** 2026-03-29 05:40:10.646706 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:40:10.646711 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:40:10.646716 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:40:10.646721 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 05:40:10.646726 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:40:10.646732 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:40:10.646737 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:40:10.646742 | orchestrator | 2026-03-29 05:40:10.646747 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-03-29 05:40:10.646753 | orchestrator | Sunday 29 March 2026 05:40:07 +0000 (0:00:01.959) 0:04:15.529 ********** 2026-03-29 05:40:10.646758 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:40:10.646763 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:40:10.646769 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:40:10.646774 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:40:10.646779 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:40:10.646785 | orchestrator | skipping: [testbed-node-5] 2026-03-29 05:40:10.646790 | orchestrator | skipping: [testbed-manager] 2026-03-29 05:40:10.646795 | orchestrator | 2026-03-29 05:40:10.646800 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-03-29 05:40:10.646806 | orchestrator | Sunday 29 March 2026 05:40:09 +0000 (0:00:02.073) 0:04:17.602 ********** 2026-03-29 05:40:10.646813 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 05:40:10.646867 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 05:40:10.646875 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 05:40:10.646883 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 05:40:10.646889 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 05:40:10.646907 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 05:40:10.646926 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 05:40:10.646934 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 05:40:10.646940 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 05:40:10.646946 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 05:40:10.646952 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 05:40:10.646958 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 05:40:10.646964 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 05:40:10.646971 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 05:40:10.646977 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 05:40:10.646983 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 05:40:10.646989 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 05:40:10.646996 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 05:40:10.647002 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 05:40:10.647009 | orchestrator | skipping: [testbed-node-1] 2026-03-29 05:40:10.647015 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 05:40:10.647021 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 05:40:10.647032 | orchestrator | skipping: [testbed-node-2] 2026-03-29 05:40:10.647038 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile 
rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 05:40:10.647044 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 05:40:10.647050 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 05:40:10.647057 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 05:40:10.647063 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 05:40:10.647069 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 05:40:10.647078 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 05:40:10.647088 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 05:40:15.452249 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 05:40:15.452384 | orchestrator | skipping: [testbed-node-3] 2026-03-29 05:40:15.452414 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 
'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-29 05:40:15.452438 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 05:40:15.452458 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-29 05:40:15.452479 | orchestrator | skipping: [testbed-node-4] 2026-03-29 05:40:15.452498 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 05:40:15.452511 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 05:40:15.452522 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-29 05:40:15.452533 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-29 05:40:15.452544 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-29 05:40:15.452555 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-29 05:40:15.452592 | orchestrator | skipping: [testbed-manager] => 
(item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:15.452604 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:15.452615 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:15.452626 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:15.452637 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:15.452647 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:15.452658 | orchestrator |
2026-03-29 05:40:15.452669 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-03-29 05:40:15.452681 | orchestrator | Sunday 29 March 2026 05:40:12 +0000 (0:00:02.298) 0:04:19.901 **********
2026-03-29 05:40:15.452692 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:15.452702 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:15.452713 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:15.452723 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:15.452734 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:15.452744 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:15.452755 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:15.452765 | orchestrator |
2026-03-29 05:40:15.452776 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-03-29 05:40:15.452790 | orchestrator | Sunday 29 March 2026 05:40:14 +0000 (0:00:02.379) 0:04:22.280 **********
2026-03-29 05:40:15.452803 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-29 05:40:15.452816 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-29 05:40:15.452868 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-29 05:40:15.452880 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-29 05:40:15.452911 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:15.452923 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:15.452933 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:15.452944 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-29 05:40:15.452955 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-29 05:40:15.452965 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-29 05:40:15.452985 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-29 05:40:15.452996 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:15.453007 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:15.453017 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:15.453028 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-29 05:40:15.453040 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-29 05:40:15.453051 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-29 05:40:15.453062 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-29 05:40:15.453072 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:15.453083 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:15.453094 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:15.453105 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-29 05:40:15.453116 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-29 05:40:15.453126 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-29 05:40:15.453137 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-29 05:40:15.453147 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-29 05:40:15.453158 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-29 05:40:15.453174 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-29 05:40:15.453193 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:42.608005 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-29 05:40:42.608127 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-29 05:40:42.608172 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:42.608185 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:42.608199 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-29 05:40:42.608210 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-29 05:40:42.608221 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:42.608235 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-29 05:40:42.608246 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-29 05:40:42.608257 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-29 05:40:42.608267 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-29 05:40:42.608278 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-29 05:40:42.608289 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:42.608300 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:42.608311 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:42.608321 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-29 05:40:42.608333 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:42.608343 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:42.608355 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-29 05:40:42.608366 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:42.608377 | orchestrator |
2026-03-29 05:40:42.608389 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-03-29 05:40:42.608401 | orchestrator | Sunday 29 March 2026 05:40:16 +0000 (0:00:02.252) 0:04:24.533 **********
2026-03-29 05:40:42.608411 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:42.608422 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:42.608433 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:42.608444 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:42.608456 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:42.608468 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:42.608489 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:42.608502 | orchestrator |
2026-03-29 05:40:42.608515 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-03-29 05:40:42.608528 | orchestrator | Sunday 29 March 2026 05:40:18 +0000 (0:00:01.904) 0:04:26.437 **********
2026-03-29 05:40:42.608554 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:42.608566 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:42.608576 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:42.608587 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:42.608598 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:42.608608 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:42.608619 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:42.608630 | orchestrator |
2026-03-29 05:40:42.608641 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-03-29 05:40:42.608671 | orchestrator | Sunday 29 March 2026 05:40:20 +0000 (0:00:01.762) 0:04:28.200 **********
2026-03-29 05:40:42.608683 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:42.608694 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:42.608709 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:42.608727 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:42.608745 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:42.608762 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:42.608780 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:42.608798 | orchestrator |
2026-03-29 05:40:42.608818 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-03-29 05:40:42.608836 | orchestrator | Sunday 29 March 2026 05:40:22 +0000 (0:00:01.988) 0:04:30.189 **********
2026-03-29 05:40:42.608855 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-29 05:40:42.608905 | orchestrator |
2026-03-29 05:40:42.608916 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-03-29 05:40:42.608928 | orchestrator | Sunday 29 March 2026 05:40:24 +0000 (0:00:02.353) 0:04:32.542 **********
2026-03-29 05:40:42.608939 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-29 05:40:42.608950 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-29 05:40:42.608961 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-29 05:40:42.608971 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-29 05:40:42.608982 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-29 05:40:42.608993 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-29 05:40:42.609003 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-03-29 05:40:42.609014 | orchestrator |
2026-03-29 05:40:42.609024 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-03-29 05:40:42.609035 | orchestrator | Sunday 29 March 2026 05:40:26 +0000 (0:00:01.994) 0:04:34.537 **********
2026-03-29 05:40:42.609046 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:42.609056 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:42.609067 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:42.609078 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:42.609089 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:42.609099 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:42.609110 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:42.609121 | orchestrator |
2026-03-29 05:40:42.609132 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-03-29 05:40:42.609142 | orchestrator | Sunday 29 March 2026 05:40:28 +0000 (0:00:01.969) 0:04:36.507 **********
2026-03-29 05:40:42.609154 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:42.609173 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:42.609184 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:42.609195 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:42.609205 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:42.609216 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:42.609227 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:42.609237 | orchestrator |
2026-03-29 05:40:42.609248 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-03-29 05:40:42.609259 | orchestrator | Sunday 29 March 2026 05:40:30 +0000 (0:00:02.158) 0:04:38.666 **********
2026-03-29 05:40:42.609270 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:40:42.609281 | orchestrator | ok: [testbed-node-1]
2026-03-29 05:40:42.609292 | orchestrator | ok: [testbed-node-2]
2026-03-29 05:40:42.609302 | orchestrator | ok: [testbed-node-3]
2026-03-29 05:40:42.609314 | orchestrator | ok: [testbed-node-4]
2026-03-29 05:40:42.609334 | orchestrator | ok: [testbed-node-5]
2026-03-29 05:40:42.609353 | orchestrator | ok: [testbed-manager]
2026-03-29 05:40:42.609371 | orchestrator |
2026-03-29 05:40:42.609390 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-03-29 05:40:42.609406 | orchestrator | Sunday 29 March 2026 05:40:33 +0000 (0:00:02.516) 0:04:41.183 **********
2026-03-29 05:40:42.609423 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:42.609442 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:42.609461 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:42.609479 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:42.609498 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:42.609518 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:42.609537 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:42.609557 | orchestrator |
2026-03-29 05:40:42.609575 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-03-29 05:40:42.609594 | orchestrator | Sunday 29 March 2026 05:40:35 +0000 (0:00:02.261) 0:04:43.444 **********
2026-03-29 05:40:42.609613 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:42.609632 | orchestrator | skipping: [testbed-node-1]
2026-03-29 05:40:42.609652 | orchestrator | skipping: [testbed-node-2]
2026-03-29 05:40:42.609671 | orchestrator | skipping: [testbed-node-3]
2026-03-29 05:40:42.609690 | orchestrator | skipping: [testbed-node-4]
2026-03-29 05:40:42.609708 | orchestrator | skipping: [testbed-node-5]
2026-03-29 05:40:42.609720 | orchestrator | skipping: [testbed-manager]
2026-03-29 05:40:42.609731 | orchestrator |
2026-03-29 05:40:42.609742 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-03-29 05:40:42.609761 | orchestrator | Sunday 29 March 2026 05:40:38 +0000 (0:00:02.721) 0:04:45.777 **********
2026-03-29 05:40:42.609772 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:40:42.609783 | orchestrator |
2026-03-29 05:40:42.609794 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-03-29 05:40:42.609804 | orchestrator | Sunday 29 March 2026 05:40:40 +0000 (0:00:02.721) 0:04:48.499 **********
2026-03-29 05:40:42.609815 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:40:42.609826 | orchestrator |
2026-03-29 05:40:42.609848 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-03-29 05:41:21.819270 | orchestrator |
2026-03-29 05:41:21.819424 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-29 05:41:21.819455 | orchestrator | Sunday 29 March 2026 05:40:42 +0000 (0:00:01.826) 0:04:50.325 **********
2026-03-29 05:41:21.819468 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.819481 | orchestrator |
2026-03-29 05:41:21.819492 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-29 05:41:21.819504 | orchestrator | Sunday 29 March 2026 05:40:44 +0000 (0:00:01.442) 0:04:51.768 **********
2026-03-29 05:41:21.819515 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.819526 | orchestrator |
2026-03-29 05:41:21.819537 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-03-29 05:41:21.819573 | orchestrator | Sunday 29 March 2026 05:40:45 +0000 (0:00:01.129) 0:04:52.897 **********
2026-03-29 05:41:21.819587 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-29 05:41:21.819606 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-29 05:41:21.819624 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-29 05:41:21.819656 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-29 05:41:21.819675 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-29 05:41:21.819694 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}])
2026-03-29 05:41:21.819715 | orchestrator |
2026-03-29 05:41:21.819732 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-03-29 05:41:21.819750 | orchestrator |
2026-03-29 05:41:21.819767 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-03-29 05:41:21.819784 | orchestrator | Sunday 29 March 2026 05:40:55 +0000 (0:00:10.536) 0:05:03.434 **********
2026-03-29 05:41:21.819799 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.819817 | orchestrator |
2026-03-29 05:41:21.819834 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-03-29 05:41:21.819850 | orchestrator | Sunday 29 March 2026 05:40:57 +0000 (0:00:01.453) 0:05:04.887 **********
2026-03-29 05:41:21.819868 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.819884 | orchestrator |
2026-03-29 05:41:21.819902 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-03-29 05:41:21.819952 | orchestrator | Sunday 29 March 2026 05:40:58 +0000 (0:00:01.170) 0:05:06.058 **********
2026-03-29 05:41:21.819971 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:21.819990 | orchestrator |
2026-03-29 05:41:21.820008 | orchestrator | TASK [Select a running monitor] ************************************************
2026-03-29 05:41:21.820044 | orchestrator | Sunday 29 March 2026 05:40:59 +0000 (0:00:01.108) 0:05:07.167 **********
2026-03-29 05:41:21.820063 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820082 | orchestrator |
2026-03-29 05:41:21.820101 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-29 05:41:21.820135 | orchestrator | Sunday 29 March 2026 05:41:00 +0000 (0:00:01.123) 0:05:08.290 **********
2026-03-29 05:41:21.820154 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-03-29 05:41:21.820172 | orchestrator |
2026-03-29 05:41:21.820191 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-29 05:41:21.820239 | orchestrator | Sunday 29 March 2026 05:41:01 +0000 (0:00:01.081) 0:05:09.372 **********
2026-03-29 05:41:21.820264 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820282 | orchestrator |
2026-03-29 05:41:21.820300 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-29 05:41:21.820317 | orchestrator | Sunday 29 March 2026 05:41:03 +0000 (0:00:01.448) 0:05:10.821 **********
2026-03-29 05:41:21.820334 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820352 | orchestrator |
2026-03-29 05:41:21.820385 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-29 05:41:21.820404 | orchestrator | Sunday 29 March 2026 05:41:04 +0000 (0:00:01.087) 0:05:11.908 **********
2026-03-29 05:41:21.820424 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820443 | orchestrator |
2026-03-29 05:41:21.820461 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-29 05:41:21.820482 | orchestrator | Sunday 29 March 2026 05:41:05 +0000 (0:00:01.454) 0:05:13.363 **********
2026-03-29 05:41:21.820502 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820520 | orchestrator |
2026-03-29 05:41:21.820538 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-29 05:41:21.820558 | orchestrator | Sunday 29 March 2026 05:41:06 +0000 (0:00:01.130) 0:05:14.494 **********
2026-03-29 05:41:21.820576 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820591 | orchestrator |
2026-03-29 05:41:21.820602 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-29 05:41:21.820613 | orchestrator | Sunday 29 March 2026 05:41:07 +0000 (0:00:01.115) 0:05:15.609 **********
2026-03-29 05:41:21.820624 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820635 | orchestrator |
2026-03-29 05:41:21.820646 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-29 05:41:21.820657 | orchestrator | Sunday 29 March 2026 05:41:09 +0000 (0:00:01.173) 0:05:16.783 **********
2026-03-29 05:41:21.820668 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:21.820679 | orchestrator |
2026-03-29 05:41:21.820690 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-29 05:41:21.820700 | orchestrator | Sunday 29 March 2026 05:41:10 +0000 (0:00:01.138) 0:05:17.922 **********
2026-03-29 05:41:21.820711 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820721 | orchestrator |
2026-03-29 05:41:21.820732 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-29 05:41:21.820742 | orchestrator | Sunday 29 March 2026 05:41:11 +0000 (0:00:01.141) 0:05:19.063 **********
2026-03-29 05:41:21.820753 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:41:21.820767 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 05:41:21.820792 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 05:41:21.820814 | orchestrator |
2026-03-29 05:41:21.820832 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-29 05:41:21.820849 | orchestrator | Sunday 29 March 2026 05:41:12 +0000 (0:00:01.659) 0:05:20.723 **********
2026-03-29 05:41:21.820865 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:21.820883 | orchestrator |
2026-03-29 05:41:21.820900 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-29 05:41:21.820962 | orchestrator | Sunday 29 March 2026 05:41:14 +0000 (0:00:01.242) 0:05:21.965 **********
2026-03-29 05:41:21.820982 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:41:21.821001 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 05:41:21.821037 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 05:41:21.821050 | orchestrator |
2026-03-29 05:41:21.821061 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-29 05:41:21.821072 | orchestrator | Sunday 29 March 2026 05:41:17 +0000 (0:00:03.105) 0:05:25.071 **********
2026-03-29 05:41:21.821083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 05:41:21.821094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 05:41:21.821105 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 05:41:21.821121 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:21.821139 | orchestrator |
2026-03-29 05:41:21.821156 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-29 05:41:21.821172 | orchestrator | Sunday 29 March 2026 05:41:18 +0000 (0:00:01.469) 0:05:26.540 **********
2026-03-29 05:41:21.821193 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-29 05:41:21.821213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-29 05:41:21.821241 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-29 05:41:21.821259 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:21.821275 | orchestrator |
2026-03-29 05:41:21.821290 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-29 05:41:21.821306 | orchestrator | Sunday 29 March 2026 05:41:20 +0000 (0:00:01.875) 0:05:28.416 **********
2026-03-29 05:41:21.821340 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 05:41:41.807626 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 05:41:41.807756 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 05:41:41.807773 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:41.807786 | orchestrator |
2026-03-29 05:41:41.807797 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-29 05:41:41.807808 | orchestrator | Sunday 29 March 2026 05:41:21 +0000 (0:00:01.131) 0:05:29.548 **********
2026-03-29 05:41:41.807821 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a25d3bb21130', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 05:41:14.776638', 'end': '2026-03-29 05:41:14.826538', 'delta': '0:00:00.049900', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a25d3bb21130'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-29 05:41:41.807856 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a6db66d8015c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 05:41:15.355757', 'end': '2026-03-29 05:41:15.397676', 'delta': '0:00:00.041919', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a6db66d8015c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-29 05:41:41.807867 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5a2b09aac491', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 05:41:16.159217', 'end': '2026-03-29 05:41:16.221681', 'delta': '0:00:00.062464', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5a2b09aac491'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-29 05:41:41.807878 | orchestrator |
2026-03-29 05:41:41.807889 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-29 05:41:41.807912 | orchestrator | Sunday 29 March 2026 05:41:22 +0000 (0:00:01.153) 0:05:30.701 **********
2026-03-29 05:41:41.807922 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:41.808010 | orchestrator |
2026-03-29 05:41:41.808031 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-29 05:41:41.808047 | orchestrator | Sunday 29 March 2026 05:41:24 +0000 (0:00:01.573) 0:05:32.275 **********
2026-03-29 05:41:41.808070 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:41.808088 | orchestrator |
2026-03-29 05:41:41.808104 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-29 05:41:41.808121 | orchestrator | Sunday 29 March 2026 05:41:25 +0000 (0:00:01.272) 0:05:33.548 **********
2026-03-29 05:41:41.808137 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:41.808154 | orchestrator |
2026-03-29 05:41:41.808171 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-29 05:41:41.808184 | orchestrator | Sunday 29 March 2026 05:41:26 +0000 (0:00:01.146) 0:05:34.695 **********
2026-03-29 05:41:41.808215 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-03-29 05:41:41.808227 | orchestrator |
2026-03-29 05:41:41.808238 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-29 05:41:41.808250 | orchestrator | Sunday 29 March 2026 05:41:29 +0000 (0:00:02.161) 0:05:36.857 **********
2026-03-29 05:41:41.808261 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:41:41.808272 | orchestrator |
2026-03-29 05:41:41.808283 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-29 05:41:41.808294 | orchestrator | Sunday 29 March 2026 05:41:30 +0000 (0:00:01.153) 0:05:38.011 **********
2026-03-29 05:41:41.808303 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:41.808313 | orchestrator |
2026-03-29 05:41:41.808322 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-29 05:41:41.808368 | orchestrator | Sunday 29 March 2026 05:41:31 +0000 (0:00:01.112) 0:05:39.123 **********
2026-03-29 05:41:41.808378 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:41.808388 | orchestrator |
2026-03-29 05:41:41.808397 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-29 05:41:41.808407 | orchestrator | Sunday 29 March 2026 05:41:32 +0000 (0:00:01.199) 0:05:40.322 **********
2026-03-29 05:41:41.808416 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:41:41.808427 | orchestrator |
2026-03-29 05:41:41.808436 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-29 05:41:41.808446 | orchestrator | Sunday 29 March 2026 05:41:33 +0000 (0:00:01.148) 0:05:41.471 **********
2026-03-29 05:41:41.808456 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:41:41.808465 | orchestrator | 2026-03-29 05:41:41.808475 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-29 05:41:41.808484 | orchestrator | Sunday 29 March 2026 05:41:34 +0000 (0:00:01.100) 0:05:42.571 ********** 2026-03-29 05:41:41.808493 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:41:41.808503 | orchestrator | 2026-03-29 05:41:41.808512 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-29 05:41:41.808522 | orchestrator | Sunday 29 March 2026 05:41:35 +0000 (0:00:01.122) 0:05:43.693 ********** 2026-03-29 05:41:41.808531 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:41:41.808541 | orchestrator | 2026-03-29 05:41:41.808550 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-29 05:41:41.808560 | orchestrator | Sunday 29 March 2026 05:41:37 +0000 (0:00:01.150) 0:05:44.844 ********** 2026-03-29 05:41:41.808569 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:41:41.808579 | orchestrator | 2026-03-29 05:41:41.808588 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-29 05:41:41.808598 | orchestrator | Sunday 29 March 2026 05:41:38 +0000 (0:00:01.143) 0:05:45.988 ********** 2026-03-29 05:41:41.808608 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:41:41.808617 | orchestrator | 2026-03-29 05:41:41.808627 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-29 05:41:41.808637 | orchestrator | Sunday 29 March 2026 05:41:39 +0000 (0:00:01.118) 0:05:47.107 ********** 2026-03-29 05:41:41.808647 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:41:41.808657 | orchestrator | 2026-03-29 05:41:41.808666 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-03-29 05:41:41.808675 | orchestrator | Sunday 29 March 2026 05:41:40 +0000 (0:00:01.147) 0:05:48.254 ********** 2026-03-29 05:41:41.808686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:41:41.808696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:41:41.808718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:41:41.808738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-29 05:41:41.808777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:41:43.054603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:41:43.054706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:41:43.054728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8615e525', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-29 05:41:43.054767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:41:43.054820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-29 05:41:43.054842 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:41:43.054898 | orchestrator | 2026-03-29 05:41:43.054921 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-29 05:41:43.055045 | orchestrator | Sunday 29 March 2026 05:41:41 +0000 (0:00:01.280) 0:05:49.535 ********** 2026-03-29 05:41:43.055083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:41:43.055097 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:41:43.055109 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:41:43.055122 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-29-01-37-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:41:43.055144 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:41:43.055168 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:41:43.055189 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:42:06.505168 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8615e525', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1', 'scsi-SQEMU_QEMU_HARDDISK_8615e525-25c8-40da-bcc5-a75883081ac3-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:42:06.505303 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:42:06.505339 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-29 05:42:06.505349 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:42:06.505359 | orchestrator | 2026-03-29 05:42:06.505368 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-29 05:42:06.505377 | 
orchestrator | Sunday 29 March 2026 05:41:43 +0000 (0:00:01.253) 0:05:50.788 ********** 2026-03-29 05:42:06.505384 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:42:06.505393 | orchestrator | 2026-03-29 05:42:06.505400 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-29 05:42:06.505407 | orchestrator | Sunday 29 March 2026 05:41:44 +0000 (0:00:01.488) 0:05:52.276 ********** 2026-03-29 05:42:06.505414 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:42:06.505421 | orchestrator | 2026-03-29 05:42:06.505428 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 05:42:06.505450 | orchestrator | Sunday 29 March 2026 05:41:45 +0000 (0:00:01.130) 0:05:53.406 ********** 2026-03-29 05:42:06.505458 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:42:06.505465 | orchestrator | 2026-03-29 05:42:06.505472 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 05:42:06.505479 | orchestrator | Sunday 29 March 2026 05:41:47 +0000 (0:00:01.493) 0:05:54.900 ********** 2026-03-29 05:42:06.505487 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:42:06.505494 | orchestrator | 2026-03-29 05:42:06.505501 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 05:42:06.505508 | orchestrator | Sunday 29 March 2026 05:41:48 +0000 (0:00:01.114) 0:05:56.014 ********** 2026-03-29 05:42:06.505516 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:42:06.505523 | orchestrator | 2026-03-29 05:42:06.505530 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 05:42:06.505537 | orchestrator | Sunday 29 March 2026 05:41:49 +0000 (0:00:01.204) 0:05:57.219 ********** 2026-03-29 05:42:06.505544 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:42:06.505551 | orchestrator | 2026-03-29 05:42:06.505558 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-29 05:42:06.505565 | orchestrator | Sunday 29 March 2026 05:41:50 +0000 (0:00:01.155) 0:05:58.375 ********** 2026-03-29 05:42:06.505573 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:42:06.505580 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 05:42:06.505587 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 05:42:06.505594 | orchestrator | 2026-03-29 05:42:06.505601 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 05:42:06.505609 | orchestrator | Sunday 29 March 2026 05:41:52 +0000 (0:00:01.913) 0:06:00.288 ********** 2026-03-29 05:42:06.505617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 05:42:06.505632 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 05:42:06.505640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 05:42:06.505647 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:42:06.505654 | orchestrator | 2026-03-29 05:42:06.505662 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-29 05:42:06.505669 | orchestrator | Sunday 29 March 2026 05:41:53 +0000 (0:00:01.161) 0:06:01.450 ********** 2026-03-29 05:42:06.505676 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:42:06.505683 | orchestrator | 2026-03-29 05:42:06.505690 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-29 05:42:06.505697 | orchestrator | Sunday 29 March 2026 05:41:54 +0000 (0:00:01.141) 0:06:02.591 ********** 2026-03-29 05:42:06.505705 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:42:06.505712 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 
05:42:06.505720 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:42:06.505727 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-29 05:42:06.505734 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 05:42:06.505741 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 05:42:06.505748 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 05:42:06.505756 | orchestrator | 2026-03-29 05:42:06.505763 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-29 05:42:06.505770 | orchestrator | Sunday 29 March 2026 05:41:56 +0000 (0:00:02.038) 0:06:04.630 ********** 2026-03-29 05:42:06.505777 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:42:06.505784 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 05:42:06.505796 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:42:06.505803 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-29 05:42:06.505810 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 05:42:06.505817 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 05:42:06.505824 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 05:42:06.505832 | orchestrator | 2026-03-29 05:42:06.505839 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-29 05:42:06.505846 | orchestrator | Sunday 29 March 2026 05:41:59 +0000 (0:00:02.756) 0:06:07.386 
********** 2026-03-29 05:42:06.505853 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-29 05:42:06.505860 | orchestrator | 2026-03-29 05:42:06.505867 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-29 05:42:06.505874 | orchestrator | Sunday 29 March 2026 05:42:01 +0000 (0:00:02.248) 0:06:09.634 ********** 2026-03-29 05:42:06.505881 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:42:06.505889 | orchestrator | 2026-03-29 05:42:06.505896 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-29 05:42:06.505903 | orchestrator | Sunday 29 March 2026 05:42:03 +0000 (0:00:01.200) 0:06:10.835 ********** 2026-03-29 05:42:06.505910 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:42:06.505917 | orchestrator | 2026-03-29 05:42:06.505924 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-29 05:42:06.505931 | orchestrator | Sunday 29 March 2026 05:42:04 +0000 (0:00:01.112) 0:06:11.948 ********** 2026-03-29 05:42:06.505939 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-29 05:42:06.505946 | orchestrator | 2026-03-29 05:42:06.505953 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-29 05:42:06.505999 | orchestrator | Sunday 29 March 2026 05:42:06 +0000 (0:00:02.279) 0:06:14.228 ********** 2026-03-29 05:43:07.435629 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:43:07.435759 | orchestrator | 2026-03-29 05:43:07.435775 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-29 05:43:07.435787 | orchestrator | Sunday 29 March 2026 05:42:07 +0000 (0:00:01.113) 0:06:15.342 ********** 2026-03-29 05:43:07.435798 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:43:07.435809 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 05:43:07.435820 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 05:43:07.435830 | orchestrator | 2026-03-29 05:43:07.435840 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-29 05:43:07.435850 | orchestrator | Sunday 29 March 2026 05:42:10 +0000 (0:00:02.461) 0:06:17.803 ********** 2026-03-29 05:43:07.435859 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-03-29 05:43:07.435869 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-03-29 05:43:07.435880 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-03-29 05:43:07.435890 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-03-29 05:43:07.435900 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-03-29 05:43:07.435910 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-03-29 05:43:07.435920 | orchestrator | 2026-03-29 05:43:07.435930 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-29 05:43:07.435939 | orchestrator | Sunday 29 March 2026 05:42:23 +0000 (0:00:13.838) 0:06:31.642 ********** 2026-03-29 05:43:07.435949 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:43:07.435959 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:43:07.435969 | orchestrator | 2026-03-29 05:43:07.435979 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-03-29 05:43:07.435988 | orchestrator | Sunday 29 
March 2026 05:42:27 +0000 (0:00:03.735) 0:06:35.378 **********
2026-03-29 05:43:07.435998 | orchestrator | changed: [testbed-node-0]
2026-03-29 05:43:07.436008 | orchestrator |
2026-03-29 05:43:07.436017 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-29 05:43:07.436093 | orchestrator | Sunday 29 March 2026 05:42:30 +0000 (0:00:02.523) 0:06:37.901 **********
2026-03-29 05:43:07.436106 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-29 05:43:07.436115 | orchestrator |
2026-03-29 05:43:07.436125 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-29 05:43:07.436135 | orchestrator | Sunday 29 March 2026 05:42:31 +0000 (0:00:01.450) 0:06:39.352 **********
2026-03-29 05:43:07.436144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-29 05:43:07.436154 | orchestrator |
2026-03-29 05:43:07.436166 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-29 05:43:07.436177 | orchestrator | Sunday 29 March 2026 05:42:33 +0000 (0:00:01.495) 0:06:40.848 **********
2026-03-29 05:43:07.436190 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:07.436202 | orchestrator |
2026-03-29 05:43:07.436213 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-29 05:43:07.436224 | orchestrator | Sunday 29 March 2026 05:42:34 +0000 (0:00:01.539) 0:06:42.388 **********
2026-03-29 05:43:07.436236 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436247 | orchestrator |
2026-03-29 05:43:07.436273 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-29 05:43:07.436306 | orchestrator | Sunday 29 March 2026 05:42:35 +0000 (0:00:01.110) 0:06:43.498 **********
2026-03-29 05:43:07.436317 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436328 | orchestrator |
2026-03-29 05:43:07.436340 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-29 05:43:07.436352 | orchestrator | Sunday 29 March 2026 05:42:36 +0000 (0:00:01.121) 0:06:44.620 **********
2026-03-29 05:43:07.436363 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436374 | orchestrator |
2026-03-29 05:43:07.436385 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-29 05:43:07.436396 | orchestrator | Sunday 29 March 2026 05:42:37 +0000 (0:00:01.105) 0:06:45.725 **********
2026-03-29 05:43:07.436408 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:07.436419 | orchestrator |
2026-03-29 05:43:07.436430 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-29 05:43:07.436440 | orchestrator | Sunday 29 March 2026 05:42:39 +0000 (0:00:01.580) 0:06:47.306 **********
2026-03-29 05:43:07.436451 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436462 | orchestrator |
2026-03-29 05:43:07.436474 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-29 05:43:07.436485 | orchestrator | Sunday 29 March 2026 05:42:40 +0000 (0:00:01.122) 0:06:48.429 **********
2026-03-29 05:43:07.436497 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436508 | orchestrator |
2026-03-29 05:43:07.436519 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-29 05:43:07.436529 | orchestrator | Sunday 29 March 2026 05:42:41 +0000 (0:00:01.113) 0:06:49.543 **********
2026-03-29 05:43:07.436538 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:07.436548 | orchestrator |
2026-03-29 05:43:07.436557 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-29 05:43:07.436567 | orchestrator | Sunday 29 March 2026 05:42:43 +0000 (0:00:01.556) 0:06:51.099 **********
2026-03-29 05:43:07.436576 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:07.436585 | orchestrator |
2026-03-29 05:43:07.436612 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-29 05:43:07.436622 | orchestrator | Sunday 29 March 2026 05:42:44 +0000 (0:00:01.541) 0:06:52.641 **********
2026-03-29 05:43:07.436631 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436641 | orchestrator |
2026-03-29 05:43:07.436650 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-29 05:43:07.436660 | orchestrator | Sunday 29 March 2026 05:42:46 +0000 (0:00:01.171) 0:06:53.812 **********
2026-03-29 05:43:07.436669 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:07.436679 | orchestrator |
2026-03-29 05:43:07.436688 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-29 05:43:07.436698 | orchestrator | Sunday 29 March 2026 05:42:47 +0000 (0:00:01.149) 0:06:54.962 **********
2026-03-29 05:43:07.436707 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436716 | orchestrator |
2026-03-29 05:43:07.436726 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-29 05:43:07.436735 | orchestrator | Sunday 29 March 2026 05:42:48 +0000 (0:00:01.115) 0:06:56.077 **********
2026-03-29 05:43:07.436745 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436754 | orchestrator |
2026-03-29 05:43:07.436764 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-29 05:43:07.436773 | orchestrator | Sunday 29 March 2026 05:42:49 +0000 (0:00:01.113) 0:06:57.192 **********
2026-03-29 05:43:07.436782 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436792 | orchestrator |
2026-03-29 05:43:07.436801 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-29 05:43:07.436811 | orchestrator | Sunday 29 March 2026 05:42:50 +0000 (0:00:01.153) 0:06:58.345 **********
2026-03-29 05:43:07.436820 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436830 | orchestrator |
2026-03-29 05:43:07.436839 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-29 05:43:07.436856 | orchestrator | Sunday 29 March 2026 05:42:51 +0000 (0:00:01.125) 0:06:59.470 **********
2026-03-29 05:43:07.436866 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.436875 | orchestrator |
2026-03-29 05:43:07.436884 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-29 05:43:07.436894 | orchestrator | Sunday 29 March 2026 05:42:52 +0000 (0:00:01.103) 0:07:00.573 **********
2026-03-29 05:43:07.436903 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:07.436913 | orchestrator |
2026-03-29 05:43:07.436922 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-29 05:43:07.436932 | orchestrator | Sunday 29 March 2026 05:42:53 +0000 (0:00:01.156) 0:07:01.730 **********
2026-03-29 05:43:07.436941 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:07.436951 | orchestrator |
2026-03-29 05:43:07.436960 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-29 05:43:07.436970 | orchestrator | Sunday 29 March 2026 05:42:55 +0000 (0:00:01.172) 0:07:02.903 **********
2026-03-29 05:43:07.436980 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:07.436989 | orchestrator |
2026-03-29 05:43:07.436999 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-29 05:43:07.437008 | orchestrator | Sunday 29 March 2026 05:42:56 +0000 (0:00:01.134) 0:07:04.037 **********
2026-03-29 05:43:07.437017 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437050 | orchestrator |
2026-03-29 05:43:07.437067 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-29 05:43:07.437085 | orchestrator | Sunday 29 March 2026 05:42:57 +0000 (0:00:01.103) 0:07:05.141 **********
2026-03-29 05:43:07.437103 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437120 | orchestrator |
2026-03-29 05:43:07.437132 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-29 05:43:07.437142 | orchestrator | Sunday 29 March 2026 05:42:58 +0000 (0:00:01.123) 0:07:06.265 **********
2026-03-29 05:43:07.437152 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437161 | orchestrator |
2026-03-29 05:43:07.437171 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-29 05:43:07.437186 | orchestrator | Sunday 29 March 2026 05:42:59 +0000 (0:00:01.124) 0:07:07.389 **********
2026-03-29 05:43:07.437196 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437206 | orchestrator |
2026-03-29 05:43:07.437215 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-29 05:43:07.437225 | orchestrator | Sunday 29 March 2026 05:43:00 +0000 (0:00:01.119) 0:07:08.509 **********
2026-03-29 05:43:07.437234 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437244 | orchestrator |
2026-03-29 05:43:07.437253 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-29 05:43:07.437262 | orchestrator | Sunday 29 March 2026 05:43:01 +0000 (0:00:01.110) 0:07:09.620 **********
2026-03-29 05:43:07.437272 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437281 | orchestrator |
2026-03-29 05:43:07.437291 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-29 05:43:07.437300 | orchestrator | Sunday 29 March 2026 05:43:02 +0000 (0:00:01.116) 0:07:10.736 **********
2026-03-29 05:43:07.437310 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437319 | orchestrator |
2026-03-29 05:43:07.437329 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-29 05:43:07.437339 | orchestrator | Sunday 29 March 2026 05:43:04 +0000 (0:00:01.087) 0:07:11.824 **********
2026-03-29 05:43:07.437348 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437358 | orchestrator |
2026-03-29 05:43:07.437368 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-29 05:43:07.437377 | orchestrator | Sunday 29 March 2026 05:43:05 +0000 (0:00:01.091) 0:07:12.915 **********
2026-03-29 05:43:07.437387 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437396 | orchestrator |
2026-03-29 05:43:07.437406 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-29 05:43:07.437423 | orchestrator | Sunday 29 March 2026 05:43:06 +0000 (0:00:01.125) 0:07:14.041 **********
2026-03-29 05:43:07.437433 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:07.437442 | orchestrator |
2026-03-29 05:43:07.437452 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-29 05:43:07.437462 | orchestrator | Sunday 29 March 2026 05:43:07 +0000 (0:00:01.117) 0:07:15.158 **********
2026-03-29 05:43:58.202124 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.202302 | orchestrator |
2026-03-29 05:43:58.202334 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-29 05:43:58.202348 | orchestrator | Sunday 29 March 2026 05:43:08 +0000 (0:00:01.136) 0:07:16.295 **********
2026-03-29 05:43:58.202360 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.202371 | orchestrator |
2026-03-29 05:43:58.202383 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-29 05:43:58.202394 | orchestrator | Sunday 29 March 2026 05:43:09 +0000 (0:00:01.101) 0:07:17.397 **********
2026-03-29 05:43:58.202404 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:58.202416 | orchestrator |
2026-03-29 05:43:58.202427 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-29 05:43:58.202438 | orchestrator | Sunday 29 March 2026 05:43:11 +0000 (0:00:02.037) 0:07:19.434 **********
2026-03-29 05:43:58.202449 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:58.202460 | orchestrator |
2026-03-29 05:43:58.202470 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-29 05:43:58.202481 | orchestrator | Sunday 29 March 2026 05:43:14 +0000 (0:00:02.463) 0:07:21.898 **********
2026-03-29 05:43:58.202492 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-29 05:43:58.202503 | orchestrator |
2026-03-29 05:43:58.202514 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-29 05:43:58.202525 | orchestrator | Sunday 29 March 2026 05:43:15 +0000 (0:00:01.457) 0:07:23.355 **********
2026-03-29 05:43:58.202536 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.202546 | orchestrator |
2026-03-29 05:43:58.202557 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-29 05:43:58.202569 | orchestrator | Sunday 29 March 2026 05:43:16 +0000 (0:00:01.122) 0:07:24.477 **********
2026-03-29 05:43:58.202582 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.202595 | orchestrator |
2026-03-29 05:43:58.202607 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-29 05:43:58.202619 | orchestrator | Sunday 29 March 2026 05:43:17 +0000 (0:00:01.144) 0:07:25.622 **********
2026-03-29 05:43:58.202632 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 05:43:58.202645 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 05:43:58.202658 | orchestrator |
2026-03-29 05:43:58.202671 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-29 05:43:58.202683 | orchestrator | Sunday 29 March 2026 05:43:19 +0000 (0:00:01.780) 0:07:27.403 **********
2026-03-29 05:43:58.202695 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:58.202707 | orchestrator |
2026-03-29 05:43:58.202720 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-29 05:43:58.202732 | orchestrator | Sunday 29 March 2026 05:43:21 +0000 (0:00:01.630) 0:07:29.033 **********
2026-03-29 05:43:58.202745 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.202757 | orchestrator |
2026-03-29 05:43:58.202770 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-29 05:43:58.202782 | orchestrator | Sunday 29 March 2026 05:43:22 +0000 (0:00:01.135) 0:07:30.169 **********
2026-03-29 05:43:58.202794 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.202807 | orchestrator |
2026-03-29 05:43:58.202819 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-29 05:43:58.202858 | orchestrator | Sunday 29 March 2026 05:43:23 +0000 (0:00:01.102) 0:07:31.271 **********
2026-03-29 05:43:58.202871 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.202884 | orchestrator |
2026-03-29 05:43:58.202896 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-29 05:43:58.202909 | orchestrator | Sunday 29 March 2026 05:43:24 +0000 (0:00:01.165) 0:07:32.437 **********
2026-03-29 05:43:58.202936 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-29 05:43:58.202947 | orchestrator |
2026-03-29 05:43:58.202958 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-29 05:43:58.202970 | orchestrator | Sunday 29 March 2026 05:43:26 +0000 (0:00:01.453) 0:07:33.891 **********
2026-03-29 05:43:58.202980 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:58.202991 | orchestrator |
2026-03-29 05:43:58.203002 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-29 05:43:58.203014 | orchestrator | Sunday 29 March 2026 05:43:27 +0000 (0:00:01.699) 0:07:35.591 **********
2026-03-29 05:43:58.203025 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 05:43:58.203039 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 05:43:58.203058 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 05:43:58.203076 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203151 | orchestrator |
2026-03-29 05:43:58.203167 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-29 05:43:58.203185 | orchestrator | Sunday 29 March 2026 05:43:28 +0000 (0:00:01.139) 0:07:36.730 **********
2026-03-29 05:43:58.203204 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203222 | orchestrator |
2026-03-29 05:43:58.203241 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-29 05:43:58.203260 | orchestrator | Sunday 29 March 2026 05:43:30 +0000 (0:00:01.094) 0:07:37.825 **********
2026-03-29 05:43:58.203278 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203295 | orchestrator |
2026-03-29 05:43:58.203313 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-29 05:43:58.203333 | orchestrator | Sunday 29 March 2026 05:43:31 +0000 (0:00:01.177) 0:07:39.003 **********
2026-03-29 05:43:58.203350 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203367 | orchestrator |
2026-03-29 05:43:58.203384 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-29 05:43:58.203427 | orchestrator | Sunday 29 March 2026 05:43:32 +0000 (0:00:01.133) 0:07:40.137 **********
2026-03-29 05:43:58.203447 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203465 | orchestrator |
2026-03-29 05:43:58.203483 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-29 05:43:58.203502 | orchestrator | Sunday 29 March 2026 05:43:33 +0000 (0:00:01.117) 0:07:41.254 **********
2026-03-29 05:43:58.203522 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203540 | orchestrator |
2026-03-29 05:43:58.203558 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-29 05:43:58.203575 | orchestrator | Sunday 29 March 2026 05:43:34 +0000 (0:00:01.158) 0:07:42.413 **********
2026-03-29 05:43:58.203587 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:58.203597 | orchestrator |
2026-03-29 05:43:58.203608 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-29 05:43:58.203619 | orchestrator | Sunday 29 March 2026 05:43:37 +0000 (0:00:02.575) 0:07:44.989 **********
2026-03-29 05:43:58.203630 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:58.203642 | orchestrator |
2026-03-29 05:43:58.203661 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-29 05:43:58.203672 | orchestrator | Sunday 29 March 2026 05:43:38 +0000 (0:00:01.183) 0:07:46.172 **********
2026-03-29 05:43:58.203683 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-29 05:43:58.203708 | orchestrator |
2026-03-29 05:43:58.203719 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-29 05:43:58.203729 | orchestrator | Sunday 29 March 2026 05:43:39 +0000 (0:00:01.473) 0:07:47.646 **********
2026-03-29 05:43:58.203740 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203750 | orchestrator |
2026-03-29 05:43:58.203761 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-29 05:43:58.203772 | orchestrator | Sunday 29 March 2026 05:43:41 +0000 (0:00:01.147) 0:07:48.793 **********
2026-03-29 05:43:58.203783 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203793 | orchestrator |
2026-03-29 05:43:58.203804 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-29 05:43:58.203814 | orchestrator | Sunday 29 March 2026 05:43:42 +0000 (0:00:01.136) 0:07:49.930 **********
2026-03-29 05:43:58.203825 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203836 | orchestrator |
2026-03-29 05:43:58.203846 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-29 05:43:58.203857 | orchestrator | Sunday 29 March 2026 05:43:43 +0000 (0:00:01.110) 0:07:51.040 **********
2026-03-29 05:43:58.203867 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203878 | orchestrator |
2026-03-29 05:43:58.203888 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-29 05:43:58.203899 | orchestrator | Sunday 29 March 2026 05:43:44 +0000 (0:00:01.128) 0:07:52.169 **********
2026-03-29 05:43:58.203910 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203920 | orchestrator |
2026-03-29 05:43:58.203954 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-29 05:43:58.203965 | orchestrator | Sunday 29 March 2026 05:43:45 +0000 (0:00:01.117) 0:07:53.286 **********
2026-03-29 05:43:58.203975 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.203986 | orchestrator |
2026-03-29 05:43:58.203996 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-29 05:43:58.204007 | orchestrator | Sunday 29 March 2026 05:43:46 +0000 (0:00:01.030) 0:07:54.317 **********
2026-03-29 05:43:58.204018 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.204028 | orchestrator |
2026-03-29 05:43:58.204039 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-29 05:43:58.204050 | orchestrator | Sunday 29 March 2026 05:43:47 +0000 (0:00:01.092) 0:07:55.410 **********
2026-03-29 05:43:58.204060 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:43:58.204071 | orchestrator |
2026-03-29 05:43:58.204108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-29 05:43:58.204130 | orchestrator | Sunday 29 March 2026 05:43:48 +0000 (0:00:01.108) 0:07:56.519 **********
2026-03-29 05:43:58.204141 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:43:58.204152 | orchestrator |
2026-03-29 05:43:58.204163 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-29 05:43:58.204173 | orchestrator | Sunday 29 March 2026 05:43:49 +0000 (0:00:01.081) 0:07:57.600 **********
2026-03-29 05:43:58.204184 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-29 05:43:58.204195 | orchestrator |
2026-03-29 05:43:58.204205 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-29 05:43:58.204216 | orchestrator | Sunday 29 March 2026 05:43:51 +0000 (0:00:01.345) 0:07:58.946 **********
2026-03-29 05:43:58.204233 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-29 05:43:58.204253 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-29 05:43:58.204272 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-29 05:43:58.204291 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-29 05:43:58.204310 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-29 05:43:58.204328 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-29 05:43:58.204345 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-29 05:43:58.204375 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-29 05:43:58.204393 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-29 05:43:58.204411 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-29 05:43:58.204429 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-29 05:43:58.204449 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-29 05:43:58.204467 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-29 05:43:58.204487 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-29 05:43:58.204520 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-29 05:44:45.508127 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-29 05:44:45.508253 | orchestrator |
2026-03-29 05:44:45.508265 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-29 05:44:45.508274 | orchestrator | Sunday 29 March 2026 05:43:58 +0000 (0:00:06.974) 0:08:05.920 **********
2026-03-29 05:44:45.508281 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508288 | orchestrator |
2026-03-29 05:44:45.508295 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-29 05:44:45.508302 | orchestrator | Sunday 29 March 2026 05:43:59 +0000 (0:00:01.105) 0:08:07.026 **********
2026-03-29 05:44:45.508308 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508314 | orchestrator |
2026-03-29 05:44:45.508321 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-29 05:44:45.508327 | orchestrator | Sunday 29 March 2026 05:44:00 +0000 (0:00:01.108) 0:08:08.135 **********
2026-03-29 05:44:45.508333 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508340 | orchestrator |
2026-03-29 05:44:45.508346 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-29 05:44:45.508353 | orchestrator | Sunday 29 March 2026 05:44:01 +0000 (0:00:01.094) 0:08:09.230 **********
2026-03-29 05:44:45.508360 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508366 | orchestrator |
2026-03-29 05:44:45.508373 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-29 05:44:45.508379 | orchestrator | Sunday 29 March 2026 05:44:02 +0000 (0:00:01.093) 0:08:10.323 **********
2026-03-29 05:44:45.508386 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508393 | orchestrator |
2026-03-29 05:44:45.508400 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-29 05:44:45.508407 | orchestrator | Sunday 29 March 2026 05:44:03 +0000 (0:00:01.104) 0:08:11.428 **********
2026-03-29 05:44:45.508413 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508419 | orchestrator |
2026-03-29 05:44:45.508426 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-29 05:44:45.508434 | orchestrator | Sunday 29 March 2026 05:44:04 +0000 (0:00:01.098) 0:08:12.527 **********
2026-03-29 05:44:45.508440 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508447 | orchestrator |
2026-03-29 05:44:45.508453 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-29 05:44:45.508460 | orchestrator | Sunday 29 March 2026 05:44:05 +0000 (0:00:01.165) 0:08:13.692 **********
2026-03-29 05:44:45.508467 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508474 | orchestrator |
2026-03-29 05:44:45.508481 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-29 05:44:45.508488 | orchestrator | Sunday 29 March 2026 05:44:07 +0000 (0:00:01.130) 0:08:14.822 **********
2026-03-29 05:44:45.508496 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508503 | orchestrator |
2026-03-29 05:44:45.508510 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-29 05:44:45.508518 | orchestrator | Sunday 29 March 2026 05:44:08 +0000 (0:00:01.114) 0:08:15.937 **********
2026-03-29 05:44:45.508525 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508554 | orchestrator |
2026-03-29 05:44:45.508561 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-29 05:44:45.508568 | orchestrator | Sunday 29 March 2026 05:44:09 +0000 (0:00:01.094) 0:08:17.031 **********
2026-03-29 05:44:45.508573 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508579 | orchestrator |
2026-03-29 05:44:45.508585 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-29 05:44:45.508591 | orchestrator | Sunday 29 March 2026 05:44:10 +0000 (0:00:01.112) 0:08:18.144 **********
2026-03-29 05:44:45.508597 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508603 | orchestrator |
2026-03-29 05:44:45.508621 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-29 05:44:45.508627 | orchestrator | Sunday 29 March 2026 05:44:11 +0000 (0:00:01.165) 0:08:19.310 **********
2026-03-29 05:44:45.508633 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508639 | orchestrator |
2026-03-29 05:44:45.508644 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-29 05:44:45.508650 | orchestrator | Sunday 29 March 2026 05:44:12 +0000 (0:00:01.205) 0:08:20.515 **********
2026-03-29 05:44:45.508655 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508661 | orchestrator |
2026-03-29 05:44:45.508667 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-29 05:44:45.508673 | orchestrator | Sunday 29 March 2026 05:44:13 +0000 (0:00:01.132) 0:08:21.648 **********
2026-03-29 05:44:45.508678 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508684 | orchestrator |
2026-03-29 05:44:45.508690 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-29 05:44:45.508696 | orchestrator | Sunday 29 March 2026 05:44:15 +0000 (0:00:01.243) 0:08:22.891 **********
2026-03-29 05:44:45.508702 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508708 | orchestrator |
2026-03-29 05:44:45.508714 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-29 05:44:45.508720 | orchestrator | Sunday 29 March 2026 05:44:16 +0000 (0:00:01.101) 0:08:23.993 **********
2026-03-29 05:44:45.508726 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508733 | orchestrator |
2026-03-29 05:44:45.508739 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-29 05:44:45.508747 | orchestrator | Sunday 29 March 2026 05:44:17 +0000 (0:00:01.099) 0:08:25.092 **********
2026-03-29 05:44:45.508753 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508759 | orchestrator |
2026-03-29 05:44:45.508764 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-29 05:44:45.508771 | orchestrator | Sunday 29 March 2026 05:44:18 +0000 (0:00:01.145) 0:08:26.238 **********
2026-03-29 05:44:45.508777 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508783 | orchestrator |
2026-03-29 05:44:45.508804 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-29 05:44:45.508811 | orchestrator | Sunday 29 March 2026 05:44:19 +0000 (0:00:01.127) 0:08:27.366 **********
2026-03-29 05:44:45.508817 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508823 | orchestrator |
2026-03-29 05:44:45.508828 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-29 05:44:45.508834 | orchestrator | Sunday 29 March 2026 05:44:20 +0000 (0:00:01.120) 0:08:28.486 **********
2026-03-29 05:44:45.508841 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508847 | orchestrator |
2026-03-29 05:44:45.508854 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-29 05:44:45.508861 | orchestrator | Sunday 29 March 2026 05:44:21 +0000 (0:00:01.098) 0:08:29.585 **********
2026-03-29 05:44:45.508867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-29 05:44:45.508874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-29 05:44:45.508881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-29 05:44:45.508897 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508904 | orchestrator |
2026-03-29 05:44:45.508910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-29 05:44:45.508917 | orchestrator | Sunday 29 March 2026 05:44:23 +0000 (0:00:01.668) 0:08:31.254 **********
2026-03-29 05:44:45.508923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-29 05:44:45.508929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-29 05:44:45.508935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-29 05:44:45.508941 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508947 | orchestrator |
2026-03-29 05:44:45.508953 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-29 05:44:45.508959 | orchestrator | Sunday 29 March 2026 05:44:24 +0000 (0:00:01.400) 0:08:32.654 **********
2026-03-29 05:44:45.508965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-29 05:44:45.508971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-29 05:44:45.508976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-29 05:44:45.508982 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.508988 | orchestrator |
2026-03-29 05:44:45.508994 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-29 05:44:45.509000 | orchestrator | Sunday 29 March 2026 05:44:26 +0000 (0:00:01.518) 0:08:34.173 **********
2026-03-29 05:44:45.509006 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.509012 | orchestrator |
2026-03-29 05:44:45.509018 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-29 05:44:45.509024 | orchestrator | Sunday 29 March 2026 05:44:27 +0000 (0:00:01.098) 0:08:35.272 **********
2026-03-29 05:44:45.509031 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-29 05:44:45.509037 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.509042 | orchestrator |
2026-03-29 05:44:45.509048 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-29 05:44:45.509053 | orchestrator | Sunday 29 March 2026 05:44:28 +0000 (0:00:01.385) 0:08:36.657 **********
2026-03-29 05:44:45.509059 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:44:45.509065 | orchestrator |
2026-03-29 05:44:45.509071 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-29 05:44:45.509077 | orchestrator | Sunday 29 March 2026 05:44:30 +0000 (0:00:01.896) 0:08:38.554 **********
2026-03-29 05:44:45.509082 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:44:45.509088 | orchestrator |
2026-03-29 05:44:45.509093 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-29 05:44:45.509099 | orchestrator | Sunday 29 March 2026 05:44:31 +0000 (0:00:01.140) 0:08:39.695 **********
2026-03-29 05:44:45.509105 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-03-29 05:44:45.509111 | orchestrator |
2026-03-29 05:44:45.509121 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-29 05:44:45.509127 | orchestrator | Sunday 29 March 2026 05:44:33 +0000 (0:00:01.480) 0:08:41.175 **********
2026-03-29 05:44:45.509151 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-03-29 05:44:45.509157 | orchestrator |
2026-03-29 05:44:45.509163 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-29 05:44:45.509168 | orchestrator | Sunday 29 March 2026 05:44:36 +0000 (0:00:03.494) 0:08:44.670 **********
2026-03-29 05:44:45.509174 | orchestrator | skipping: [testbed-node-0]
2026-03-29 05:44:45.509179 | orchestrator |
2026-03-29 05:44:45.509185 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-29 05:44:45.509190 | orchestrator | Sunday 29 March 2026 05:44:38 +0000 (0:00:01.143) 0:08:45.813 **********
2026-03-29 05:44:45.509196 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:44:45.509201 | orchestrator |
2026-03-29 05:44:45.509214 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-29 05:44:45.509225 | orchestrator | Sunday 29 March 2026 05:44:39 +0000 (0:00:01.122) 0:08:46.936 **********
2026-03-29 05:44:45.509231 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:44:45.509237 | orchestrator |
2026-03-29 05:44:45.509242 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-29 05:44:45.509248 | orchestrator | Sunday 29 March 2026 05:44:40 +0000 (0:00:01.163) 0:08:48.100 **********
2026-03-29 05:44:45.509260 | orchestrator | changed: [testbed-node-0]
2026-03-29 05:44:45.509265 | orchestrator |
2026-03-29 05:44:45.509270 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-29 05:44:45.509276 | orchestrator | Sunday 29 March 2026 05:44:42 +0000 (0:00:02.092) 0:08:50.192 **********
2026-03-29 05:44:45.509281 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:44:45.509286 | orchestrator |
2026-03-29 05:44:45.509292 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-29 05:44:45.509297 | orchestrator | Sunday 29 March 2026 05:44:44 +0000 (0:00:01.577) 0:08:51.770 **********
2026-03-29 05:44:45.509303 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:44:45.509309 | orchestrator |
2026-03-29 05:44:45.509319 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-29 05:45:42.061949 | orchestrator | Sunday 29 March 2026 05:44:45 +0000 (0:00:01.465) 0:08:53.236 **********
2026-03-29 05:45:42.062186 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:45:42.062288 | orchestrator |
2026-03-29 05:45:42.062311 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-29 05:45:42.062324 | orchestrator | Sunday 29 March 2026 05:44:46 +0000 (0:00:01.440) 0:08:54.676 **********
2026-03-29 05:45:42.062335 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:45:42.062347 | orchestrator |
2026-03-29 05:45:42.062358 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-29 05:45:42.062369 | orchestrator | Sunday 29 March 2026 05:44:48 +0000 (0:00:01.671) 0:08:56.354 **********
2026-03-29 05:45:42.062380 | orchestrator | ok: [testbed-node-0]
2026-03-29 05:45:42.062391 | orchestrator |
2026-03-29 05:45:42.062403 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-29 05:45:42.062414 | orchestrator | Sunday 29 March 2026 05:44:50 +0000 (0:00:01.671) 0:08:58.026 **********
2026-03-29 05:45:42.062425 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-29 05:45:42.062437 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-29 05:45:42.062448 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-29 05:45:42.062462 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-03-29 05:45:42.062476 | orchestrator |
2026-03-29 05:45:42.062489 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-29 05:45:42.062502 | orchestrator | Sunday 29 March 2026 05:44:54 +0000 (0:00:03.970) 0:09:01.997 **********
2026-03-29 05:45:42.062515 | orchestrator | changed: [testbed-node-0]
2026-03-29 05:45:42.062528 | orchestrator |
2026-03-29 05:45:42.062542 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-29
05:45:42.062554 | orchestrator | Sunday 29 March 2026 05:44:56 +0000 (0:00:02.048) 0:09:04.045 ********** 2026-03-29 05:45:42.062568 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.062582 | orchestrator | 2026-03-29 05:45:42.062596 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-29 05:45:42.062609 | orchestrator | Sunday 29 March 2026 05:44:57 +0000 (0:00:01.124) 0:09:05.170 ********** 2026-03-29 05:45:42.062622 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.062635 | orchestrator | 2026-03-29 05:45:42.062648 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-29 05:45:42.062660 | orchestrator | Sunday 29 March 2026 05:44:58 +0000 (0:00:01.157) 0:09:06.328 ********** 2026-03-29 05:45:42.062674 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.062687 | orchestrator | 2026-03-29 05:45:42.062700 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-29 05:45:42.062713 | orchestrator | Sunday 29 March 2026 05:45:00 +0000 (0:00:02.041) 0:09:08.369 ********** 2026-03-29 05:45:42.062750 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.062764 | orchestrator | 2026-03-29 05:45:42.062777 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-29 05:45:42.062790 | orchestrator | Sunday 29 March 2026 05:45:02 +0000 (0:00:01.465) 0:09:09.834 ********** 2026-03-29 05:45:42.062803 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:45:42.062816 | orchestrator | 2026-03-29 05:45:42.062828 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-29 05:45:42.062841 | orchestrator | Sunday 29 March 2026 05:45:03 +0000 (0:00:01.095) 0:09:10.930 ********** 2026-03-29 05:45:42.062859 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-29 
05:45:42.062877 | orchestrator | 2026-03-29 05:45:42.062897 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-29 05:45:42.062915 | orchestrator | Sunday 29 March 2026 05:45:04 +0000 (0:00:01.433) 0:09:12.364 ********** 2026-03-29 05:45:42.062933 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:45:42.062945 | orchestrator | 2026-03-29 05:45:42.062969 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-29 05:45:42.062981 | orchestrator | Sunday 29 March 2026 05:45:05 +0000 (0:00:01.109) 0:09:13.474 ********** 2026-03-29 05:45:42.062992 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:45:42.063003 | orchestrator | 2026-03-29 05:45:42.063014 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-29 05:45:42.063025 | orchestrator | Sunday 29 March 2026 05:45:06 +0000 (0:00:01.064) 0:09:14.538 ********** 2026-03-29 05:45:42.063036 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-29 05:45:42.063047 | orchestrator | 2026-03-29 05:45:42.063057 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-29 05:45:42.063068 | orchestrator | Sunday 29 March 2026 05:45:07 +0000 (0:00:01.162) 0:09:15.701 ********** 2026-03-29 05:45:42.063079 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.063090 | orchestrator | 2026-03-29 05:45:42.063101 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-29 05:45:42.063111 | orchestrator | Sunday 29 March 2026 05:45:09 +0000 (0:00:01.965) 0:09:17.667 ********** 2026-03-29 05:45:42.063122 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.063133 | orchestrator | 2026-03-29 05:45:42.063144 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-29 
05:45:42.063155 | orchestrator | Sunday 29 March 2026 05:45:11 +0000 (0:00:01.937) 0:09:19.605 ********** 2026-03-29 05:45:42.063165 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.063176 | orchestrator | 2026-03-29 05:45:42.063187 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-29 05:45:42.063230 | orchestrator | Sunday 29 March 2026 05:45:14 +0000 (0:00:02.484) 0:09:22.089 ********** 2026-03-29 05:45:42.063242 | orchestrator | changed: [testbed-node-0] 2026-03-29 05:45:42.063253 | orchestrator | 2026-03-29 05:45:42.063264 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-29 05:45:42.063274 | orchestrator | Sunday 29 March 2026 05:45:17 +0000 (0:00:03.362) 0:09:25.452 ********** 2026-03-29 05:45:42.063285 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-29 05:45:42.063296 | orchestrator | 2026-03-29 05:45:42.063328 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-29 05:45:42.063340 | orchestrator | Sunday 29 March 2026 05:45:19 +0000 (0:00:01.639) 0:09:27.091 ********** 2026-03-29 05:45:42.063351 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.063362 | orchestrator | 2026-03-29 05:45:42.063373 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-29 05:45:42.063384 | orchestrator | Sunday 29 March 2026 05:45:21 +0000 (0:00:02.318) 0:09:29.410 ********** 2026-03-29 05:45:42.063395 | orchestrator | ok: [testbed-node-0] 2026-03-29 05:45:42.063406 | orchestrator | 2026-03-29 05:45:42.063417 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-29 05:45:42.063442 | orchestrator | Sunday 29 March 2026 05:45:24 +0000 (0:00:02.977) 0:09:32.388 ********** 2026-03-29 05:45:42.063461 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:45:42.063479 | orchestrator | 2026-03-29 05:45:42.063498 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-29 05:45:42.063515 | orchestrator | Sunday 29 March 2026 05:45:25 +0000 (0:00:01.131) 0:09:33.520 ********** 2026-03-29 05:45:42.063537 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-29 05:45:42.063561 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-29 05:45:42.063581 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-29 05:45:42.063601 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-29 05:45:42.063620 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-29 05:45:42.063648 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__66ae379f37ac7a9bba0cf3574581cd67a64f849e'}])  2026-03-29 05:45:42.063669 | orchestrator | 2026-03-29 05:45:42.063689 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-29 05:45:42.063708 | orchestrator | Sunday 29 March 2026 05:45:36 +0000 (0:00:10.249) 0:09:43.770 ********** 
2026-03-29 05:45:42.063725 | orchestrator | changed: [testbed-node-0] 2026-03-29 05:45:42.063745 | orchestrator | 2026-03-29 05:45:42.063763 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-29 05:45:42.063780 | orchestrator | Sunday 29 March 2026 05:45:38 +0000 (0:00:02.515) 0:09:46.285 ********** 2026-03-29 05:45:42.063799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 05:45:42.063820 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 05:45:42.063839 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 05:45:42.063858 | orchestrator | 2026-03-29 05:45:42.063876 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 05:45:42.063887 | orchestrator | Sunday 29 March 2026 05:45:40 +0000 (0:00:02.110) 0:09:48.396 ********** 2026-03-29 05:45:42.063898 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 05:45:42.063919 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 05:45:42.063930 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 05:45:42.063940 | orchestrator | skipping: [testbed-node-0] 2026-03-29 05:45:42.063951 | orchestrator | 2026-03-29 05:45:42.063962 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-29 05:45:42.063982 | orchestrator | Sunday 29 March 2026 05:45:42 +0000 (0:00:01.394) 0:09:49.791 ********** 2026-03-29 06:17:03.462317 | orchestrator | skipping: [testbed-node-0] 2026-03-29 06:17:03.462439 | orchestrator | 2026-03-29 06:17:03.462455 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-29 06:17:03.462468 | orchestrator | Sunday 29 March 2026 05:45:43 +0000 (0:00:01.116) 0:09:50.907 ********** 2026-03-29 06:17:03.462481 | orchestrator | 2026-03-29 06:17:03.462492 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-03-29 06:17:03.462614 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left). 2026-03-29 06:17:03.462883 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left). 2026-03-29 06:17:03.463414 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left). 2026-03-29 06:17:03.463763 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left). 2026-03-29 06:17:03.464119 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left). 2026-03-29 06:17:03.464408 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.10", "quorum_status", "--format", "json"], "delta": "0:05:00.260370", "end": "2026-03-29 06:17:01.946065", "msg": "non-zero return code", "rc": 1, "start": "2026-03-29 06:12:01.685695", "stderr": "2026-03-29T06:17:01.927+0000 7e2239359640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-03-29T06:17:01.927+0000 7e2239359640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []} 2026-03-29 06:17:03.464432 | orchestrator | 2026-03-29 06:17:03.464444 | orchestrator | TASK [Unmask the mon service] ************************************************** 2026-03-29 06:17:03.464466 | orchestrator | Sunday 29 March 2026 06:17:03 +0000 (0:31:20.281) 0:41:11.189 ********** 2026-03-29 06:17:10.053928 | orchestrator | ok: [testbed-node-0] 2026-03-29 06:17:10.054204 | orchestrator | 2026-03-29 06:17:10.054235 | orchestrator | TASK [Unmask the mgr service] ************************************************** 2026-03-29 06:17:10.054257 | orchestrator | Sunday 29 March 2026 06:17:05 +0000 (0:00:01.775) 0:41:12.965 ********** 2026-03-29 06:17:10.054275 | orchestrator | ok: [testbed-node-0] 2026-03-29 06:17:10.054292 | orchestrator | 2026-03-29 06:17:10.054310 | orchestrator | TASK [Stop the playbook execution] ********************************************* 2026-03-29 06:17:10.054330 | orchestrator | Sunday 29 March 2026 06:17:07 +0000 (0:00:01.817) 0:41:14.783 ********** 2026-03-29 06:17:10.054349 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. 
Please, check the previous task results."}
2026-03-29 06:17:10.054370 | orchestrator |
2026-03-29 06:17:10.054390 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 06:17:10.054409 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-03-29 06:17:10.054426 | orchestrator | testbed-manager : ok=25 changed=1 unreachable=0 failed=0 skipped=57 rescued=0 ignored=0
2026-03-29 06:17:10.054438 | orchestrator | testbed-node-0 : ok=121 changed=7 unreachable=0 failed=1 skipped=164 rescued=1 ignored=0
2026-03-29 06:17:10.054451 | orchestrator | testbed-node-1 : ok=25 changed=1 unreachable=0 failed=0 skipped=57 rescued=0 ignored=0
2026-03-29 06:17:10.054465 | orchestrator | testbed-node-2 : ok=25 changed=1 unreachable=0 failed=0 skipped=57 rescued=0 ignored=0
2026-03-29 06:17:10.054477 | orchestrator | testbed-node-3 : ok=33 changed=1 unreachable=0 failed=0 skipped=74 rescued=0 ignored=0
2026-03-29 06:17:10.054490 | orchestrator | testbed-node-4 : ok=33 changed=1 unreachable=0 failed=0 skipped=71 rescued=0 ignored=0
2026-03-29 06:17:10.054502 | orchestrator | testbed-node-5 : ok=33 changed=1 unreachable=0 failed=0 skipped=71 rescued=0 ignored=0
2026-03-29 06:17:10.054515 | orchestrator |
2026-03-29 06:17:10.054528 | orchestrator |
2026-03-29 06:17:10.054540 | orchestrator |
2026-03-29 06:17:10.054554 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 06:17:10.054567 | orchestrator | Sunday 29 March 2026 06:17:09 +0000 (0:00:02.479) 0:41:17.262 **********
2026-03-29 06:17:10.054580 | orchestrator | ===============================================================================
2026-03-29 06:17:10.054593 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1880.28s
2026-03-29 06:17:10.054606 | orchestrator | Gather and delegate facts ---------------------------------------------- 32.62s
2026-03-29 06:17:10.054618 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.84s
2026-03-29 06:17:10.054631 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.29s
2026-03-29 06:17:10.054661 | orchestrator | Set cluster configs ---------------------------------------------------- 10.54s
2026-03-29 06:17:10.054674 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.25s
2026-03-29 06:17:10.054687 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.97s
2026-03-29 06:17:10.054700 | orchestrator | Gather facts ------------------------------------------------------------ 6.38s
2026-03-29 06:17:10.054737 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 5.69s
2026-03-29 06:17:10.054750 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.97s
2026-03-29 06:17:10.054763 | orchestrator | Stop ceph mon ----------------------------------------------------------- 3.74s
2026-03-29 06:17:10.054775 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 3.49s
2026-03-29 06:17:10.054788 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 3.47s
2026-03-29 06:17:10.054802 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 3.36s
2026-03-29 06:17:10.054814 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.19s
2026-03-29 06:17:10.054827 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.11s
2026-03-29 06:17:10.054838 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.11s
2026-03-29 06:17:10.054848 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 2.98s
2026-03-29 06:17:10.054859 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 2.98s
2026-03-29 06:17:10.054870 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.83s
2026-03-29 06:17:10.582443 | orchestrator | ERROR
2026-03-29 06:17:10.582680 | orchestrator | {
2026-03-29 06:17:10.582722 | orchestrator | "delta": "2:10:18.263904",
2026-03-29 06:17:10.582748 | orchestrator | "end": "2026-03-29 06:17:10.340944",
2026-03-29 06:17:10.582770 | orchestrator | "msg": "non-zero return code",
2026-03-29 06:17:10.582790 | orchestrator | "rc": 2,
2026-03-29 06:17:10.582811 | orchestrator | "start": "2026-03-29 04:06:52.077040"
2026-03-29 06:17:10.582869 | orchestrator | } failure
2026-03-29 06:17:10.816383 |
2026-03-29 06:17:10.816536 | PLAY RECAP
2026-03-29 06:17:10.816621 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-03-29 06:17:10.816668 |
2026-03-29 06:17:11.092690 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-03-29 06:17:11.093798 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-29 06:17:11.848827 |
2026-03-29 06:17:11.849010 | PLAY [Post output play]
2026-03-29 06:17:11.867579 |
2026-03-29 06:17:11.867763 | LOOP [stage-output : Register sources]
2026-03-29 06:17:11.937410 |
2026-03-29 06:17:11.937727 | TASK [stage-output : Check sudo]
2026-03-29 06:17:12.820233 | orchestrator | sudo: a password is required
2026-03-29 06:17:12.977207 | orchestrator | ok: Runtime: 0:00:00.012405
2026-03-29 06:17:12.993066 |
2026-03-29 06:17:12.993265 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-29 06:17:13.031485 |
2026-03-29 06:17:13.031744 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-29 06:17:13.099611 | orchestrator | ok
2026-03-29 06:17:13.108050 |
2026-03-29 06:17:13.108250 | LOOP [stage-output : Ensure target folders exist]
2026-03-29 06:17:13.578140 | orchestrator | ok: "docs"
2026-03-29 06:17:13.578465 |
2026-03-29 06:17:13.841227 | orchestrator | ok: "artifacts"
2026-03-29 06:17:14.122310 | orchestrator | ok: "logs"
2026-03-29 06:17:14.140042 |
2026-03-29 06:17:14.140283 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-29 06:17:14.178467 |
2026-03-29 06:17:14.178728 | TASK [stage-output : Make all log files readable]
2026-03-29 06:17:14.498821 | orchestrator | ok
2026-03-29 06:17:14.507933 |
2026-03-29 06:17:14.508065 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-29 06:17:14.542720 | orchestrator | skipping: Conditional result was False
2026-03-29 06:17:14.558267 |
2026-03-29 06:17:14.558431 | TASK [stage-output : Discover log files for compression]
2026-03-29 06:17:14.582952 | orchestrator | skipping: Conditional result was False
2026-03-29 06:17:14.595425 |
2026-03-29 06:17:14.595584 | LOOP [stage-output : Archive everything from logs]
2026-03-29 06:17:14.639436 |
2026-03-29 06:17:14.639653 | PLAY [Post cleanup play]
2026-03-29 06:17:14.650481 |
2026-03-29 06:17:14.650608 | TASK [Set cloud fact (Zuul deployment)]
2026-03-29 06:17:14.702422 | orchestrator | ok
2026-03-29 06:17:14.711679 |
2026-03-29 06:17:14.711794 | TASK [Set cloud fact (local deployment)]
2026-03-29 06:17:14.747671 | orchestrator | skipping: Conditional result was False
2026-03-29 06:17:14.758675 |
2026-03-29 06:17:14.758804 | TASK [Clean the cloud environment]
2026-03-29 06:17:15.417041 | orchestrator | 2026-03-29 06:17:15 - clean up servers
2026-03-29 06:17:16.191787 | orchestrator | 2026-03-29 06:17:16 - testbed-manager
2026-03-29 06:17:16.277938 | orchestrator | 2026-03-29 06:17:16 - testbed-node-0
2026-03-29 06:17:16.363435 | orchestrator | 2026-03-29 06:17:16 - testbed-node-2
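The ERROR payload above reports a `delta` alongside `start` and `end` timestamps for the failed command. As a quick sanity check (hypothetical, not part of the job), the delta can be recomputed from the two stamps:

```python
from datetime import datetime

# Recompute the runtime from the "start"/"end" fields of the ERROR payload.
# This is an illustrative check, not something the Zuul job itself runs.
FMT = "%Y-%m-%d %H:%M:%S.%f"
start = datetime.strptime("2026-03-29 04:06:52.077040", FMT)
end = datetime.strptime("2026-03-29 06:17:10.340944", FMT)
delta = end - start
print(delta)  # 2:10:18.263904, matching the reported "delta" field
```

The roughly two-hour runtime is consistent with the 1880-second "waiting for the containerized monitor to join the quorum" task dominating the TASKS RECAP.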
2026-03-29 06:17:16.451606 | orchestrator | 2026-03-29 06:17:16 - testbed-node-5
2026-03-29 06:17:16.540448 | orchestrator | 2026-03-29 06:17:16 - testbed-node-1
2026-03-29 06:17:16.635837 | orchestrator | 2026-03-29 06:17:16 - testbed-node-3
2026-03-29 06:17:16.736505 | orchestrator | 2026-03-29 06:17:16 - testbed-node-4
2026-03-29 06:17:16.834556 | orchestrator | 2026-03-29 06:17:16 - clean up keypairs
2026-03-29 06:17:16.856332 | orchestrator | 2026-03-29 06:17:16 - testbed
2026-03-29 06:17:16.885416 | orchestrator | 2026-03-29 06:17:16 - wait for servers to be gone
2026-03-29 06:17:27.718885 | orchestrator | 2026-03-29 06:17:27 - clean up ports
2026-03-29 06:17:27.937230 | orchestrator | 2026-03-29 06:17:27 - 1dd0c98c-6e5e-440b-be1c-852997353c7d
2026-03-29 06:17:28.241514 | orchestrator | 2026-03-29 06:17:28 - 64283d06-eb2c-4c4a-a272-54b74d3719aa
2026-03-29 06:17:28.496701 | orchestrator | 2026-03-29 06:17:28 - 70ee15f9-eea2-4003-8422-2da72d168ead
2026-03-29 06:17:28.712656 | orchestrator | 2026-03-29 06:17:28 - 87019d94-1cf5-4e12-9173-da7b5d6a8407
2026-03-29 06:17:29.120582 | orchestrator | 2026-03-29 06:17:29 - 98404865-9f16-4e21-9c63-cc1d8fc01055
2026-03-29 06:17:29.363942 | orchestrator | 2026-03-29 06:17:29 - bf131a14-f29c-4991-873e-0da66418e4f9
2026-03-29 06:17:29.595344 | orchestrator | 2026-03-29 06:17:29 - e7b0ba07-53d9-483c-8cfc-3fc6550937da
2026-03-29 06:17:29.814362 | orchestrator | 2026-03-29 06:17:29 - clean up volumes
2026-03-29 06:17:29.934631 | orchestrator | 2026-03-29 06:17:29 - testbed-volume-1-node-base
2026-03-29 06:17:29.979145 | orchestrator | 2026-03-29 06:17:29 - testbed-volume-3-node-base
2026-03-29 06:17:30.023147 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-2-node-base
2026-03-29 06:17:30.067461 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-0-node-base
2026-03-29 06:17:30.109482 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-4-node-base
2026-03-29 06:17:30.149733 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-5-node-base
2026-03-29 06:17:30.192850 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-manager-base
2026-03-29 06:17:30.233490 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-5-node-5
2026-03-29 06:17:30.275384 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-2-node-5
2026-03-29 06:17:30.318821 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-3-node-3
2026-03-29 06:17:30.362805 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-0-node-3
2026-03-29 06:17:30.407802 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-1-node-4
2026-03-29 06:17:30.452434 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-6-node-3
2026-03-29 06:17:30.492983 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-4-node-4
2026-03-29 06:17:30.537640 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-7-node-4
2026-03-29 06:17:30.585668 | orchestrator | 2026-03-29 06:17:30 - testbed-volume-8-node-5
2026-03-29 06:17:30.628270 | orchestrator | 2026-03-29 06:17:30 - disconnect routers
2026-03-29 06:17:30.748717 | orchestrator | 2026-03-29 06:17:30 - testbed
2026-03-29 06:17:31.746195 | orchestrator | 2026-03-29 06:17:31 - clean up subnets
2026-03-29 06:17:31.803307 | orchestrator | 2026-03-29 06:17:31 - subnet-testbed-management
2026-03-29 06:17:31.981121 | orchestrator | 2026-03-29 06:17:31 - clean up networks
2026-03-29 06:17:32.180672 | orchestrator | 2026-03-29 06:17:32 - net-testbed-management
2026-03-29 06:17:32.459907 | orchestrator | 2026-03-29 06:17:32 - clean up security groups
2026-03-29 06:17:32.500948 | orchestrator | 2026-03-29 06:17:32 - testbed-node
2026-03-29 06:17:32.623458 | orchestrator | 2026-03-29 06:17:32 - testbed-management
2026-03-29 06:17:32.734607 | orchestrator | 2026-03-29 06:17:32 - clean up floating ips
2026-03-29 06:17:32.776898 | orchestrator | 2026-03-29 06:17:32 - 81.163.192.84
2026-03-29 06:17:33.162896 | orchestrator | 2026-03-29 06:17:33 - clean up routers
2026-03-29 06:17:33.270555 | orchestrator | 2026-03-29 06:17:33 - testbed
2026-03-29 06:17:34.320035 | orchestrator | ok: Runtime: 0:00:19.077553
2026-03-29 06:17:34.324201 |
2026-03-29 06:17:34.324365 | PLAY RECAP
2026-03-29 06:17:34.324489 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-29 06:17:34.324550 |
2026-03-29 06:17:34.459503 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-29 06:17:34.460559 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-29 06:17:35.186398 |
2026-03-29 06:17:35.186567 | PLAY [Cleanup play]
2026-03-29 06:17:35.202582 |
2026-03-29 06:17:35.202720 | TASK [Set cloud fact (Zuul deployment)]
2026-03-29 06:17:35.262383 | orchestrator | ok
2026-03-29 06:17:35.274323 |
2026-03-29 06:17:35.274504 | TASK [Set cloud fact (local deployment)]
2026-03-29 06:17:35.309648 | orchestrator | skipping: Conditional result was False
2026-03-29 06:17:35.326089 |
2026-03-29 06:17:35.326272 | TASK [Clean the cloud environment]
2026-03-29 06:17:36.484193 | orchestrator | 2026-03-29 06:17:36 - clean up servers
2026-03-29 06:17:36.944897 | orchestrator | 2026-03-29 06:17:36 - clean up keypairs
2026-03-29 06:17:36.962388 | orchestrator | 2026-03-29 06:17:36 - wait for servers to be gone
2026-03-29 06:17:37.005363 | orchestrator | 2026-03-29 06:17:37 - clean up ports
2026-03-29 06:17:37.075140 | orchestrator | 2026-03-29 06:17:37 - clean up volumes
2026-03-29 06:17:37.136354 | orchestrator | 2026-03-29 06:17:37 - disconnect routers
2026-03-29 06:17:37.159877 | orchestrator | 2026-03-29 06:17:37 - clean up subnets
2026-03-29 06:17:37.177791 | orchestrator | 2026-03-29 06:17:37 - clean up networks
2026-03-29 06:17:37.302805 | orchestrator | 2026-03-29 06:17:37 - clean up security groups
2026-03-29 06:17:37.341549 | orchestrator | 2026-03-29 06:17:37 - clean up floating ips
2026-03-29 06:17:37.365715 | orchestrator | 2026-03-29 06:17:37 - clean up routers
2026-03-29 06:17:37.865318 | orchestrator | ok: Runtime: 0:00:01.275979
2026-03-29 06:17:37.869086 |
2026-03-29 06:17:37.869273 | PLAY RECAP
2026-03-29 06:17:37.869394 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-29 06:17:37.869455 |
2026-03-29 06:17:38.004879 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-29 06:17:38.007612 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-29 06:17:38.802359 |
2026-03-29 06:17:38.802518 | PLAY [Base post-fetch]
2026-03-29 06:17:38.818210 |
2026-03-29 06:17:38.818362 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-29 06:17:38.875728 | orchestrator | skipping: Conditional result was False
2026-03-29 06:17:38.887474 |
2026-03-29 06:17:38.887652 | TASK [fetch-output : Set log path for single node]
2026-03-29 06:17:38.930162 | orchestrator | ok
2026-03-29 06:17:38.937521 |
2026-03-29 06:17:38.937663 | LOOP [fetch-output : Ensure local output dirs]
2026-03-29 06:17:39.425985 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8260414bd6014c3b8bec15592c50df7f/work/logs"
2026-03-29 06:17:39.715353 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8260414bd6014c3b8bec15592c50df7f/work/artifacts"
2026-03-29 06:17:40.020293 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8260414bd6014c3b8bec15592c50df7f/work/docs"
2026-03-29 06:17:40.048298 |
2026-03-29 06:17:40.048597 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-29 06:17:40.997612 | orchestrator | changed: .d..t...... ./
2026-03-29 06:17:40.998031 | orchestrator | changed: All items complete
2026-03-29 06:17:40.998116 |
2026-03-29 06:17:41.735336 | orchestrator | changed: .d..t...... ./
2026-03-29 06:17:42.482828 | orchestrator | changed: .d..t...... ./
2026-03-29 06:17:42.516589 |
2026-03-29 06:17:42.516768 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-29 06:17:42.554207 | orchestrator | skipping: Conditional result was False
2026-03-29 06:17:42.556766 | orchestrator | skipping: Conditional result was False
2026-03-29 06:17:42.580256 |
2026-03-29 06:17:42.580363 | PLAY RECAP
2026-03-29 06:17:42.580429 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-29 06:17:42.580463 |
2026-03-29 06:17:42.731285 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-29 06:17:42.733820 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-29 06:17:43.474755 |
2026-03-29 06:17:43.474947 | PLAY [Base post]
2026-03-29 06:17:43.489573 |
2026-03-29 06:17:43.489722 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-29 06:17:44.552449 | orchestrator | changed
2026-03-29 06:17:44.563608 |
2026-03-29 06:17:44.563770 | PLAY RECAP
2026-03-29 06:17:44.563873 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-29 06:17:44.563973 |
2026-03-29 06:17:44.683794 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-29 06:17:44.685982 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-29 06:17:45.528992 |
2026-03-29 06:17:45.529213 | PLAY [Base post-logs]
2026-03-29 06:17:45.540006 |
2026-03-29 06:17:45.540185 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-29 06:17:45.991265 | localhost | changed
2026-03-29 06:17:46.008916 |
2026-03-29 06:17:46.009189 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-29 06:17:46.048422 | localhost | ok
2026-03-29 06:17:46.055205 |
2026-03-29 06:17:46.055385 | TASK [Set zuul-log-path fact]
2026-03-29 06:17:46.077427 | localhost | ok
2026-03-29 06:17:46.092721 |
2026-03-29 06:17:46.092862 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-29 06:17:46.129937 | localhost | ok
2026-03-29 06:17:46.135235 |
2026-03-29 06:17:46.135401 | TASK [upload-logs : Create log directories]
2026-03-29 06:17:46.662085 | localhost | changed
2026-03-29 06:17:46.668425 |
2026-03-29 06:17:46.668639 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-29 06:17:47.198674 | localhost -> localhost | ok: Runtime: 0:00:00.007323
2026-03-29 06:17:47.207069 |
2026-03-29 06:17:47.207274 | TASK [upload-logs : Upload logs to log server]
2026-03-29 06:17:47.807192 | localhost | Output suppressed because no_log was given
2026-03-29 06:17:47.811333 |
2026-03-29 06:17:47.811518 | LOOP [upload-logs : Compress console log and json output]
2026-03-29 06:17:47.872374 | localhost | skipping: Conditional result was False
2026-03-29 06:17:47.878249 | localhost | skipping: Conditional result was False
2026-03-29 06:17:47.893968 |
2026-03-29 06:17:47.894240 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-29 06:17:47.944915 | localhost | skipping: Conditional result was False
2026-03-29 06:17:47.945567 |
2026-03-29 06:17:47.949286 | localhost | skipping: Conditional result was False
2026-03-29 06:17:47.957301 |
2026-03-29 06:17:47.957512 | LOOP [upload-logs : Upload console log and json output]
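The Ansible PLAY RECAP lines in this console follow a fixed `host : key=value ...` layout, which makes it easy for a wrapper script to spot the failing host (here testbed-node-0 with failed=1). A minimal sketch of such a parser (hypothetical helper, not part of the Zuul job or OSISM tooling):

```python
import re

# Parse an Ansible "PLAY RECAP" line such as:
#   testbed-node-0 : ok=121 changed=7 unreachable=0 failed=1 skipped=164 rescued=1 ignored=0
# into (host, {counter: value}). Illustrative only.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(line):
    """Return (host, counters) for a recap line, or raise ValueError."""
    match = RECAP_RE.match(line.strip())
    if match is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {}
    for pair in match.group("counters").split():
        key, value = pair.split("=")
        counters[key] = int(value)
    return match.group("host"), counters

host, counts = parse_recap(
    "testbed-node-0 : ok=121 changed=7 unreachable=0 failed=1 "
    "skipped=164 rescued=1 ignored=0"
)
print(host, counts["failed"])  # testbed-node-0 1
```

Note that the executor-level recap lines (`ok: 30 changed: 11 ...`) use colon-separated counters instead of `key=value`, so they would need a slightly different pattern.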